The advent of containers and microservices has brought a paradigm shift in how applications are deployed in the cloud. Since its inception in 2014, Kubernetes has emerged as the preferred tool for container orchestration in modern cloud-native applications.

Progressive Delivery

The traditional approach of releasing new features or updates in one go can lead to unexpected bugs or issues that went unnoticed during testing. To mitigate these risks, progressive delivery offers a more controlled, incremental approach to rolling out new features. By gradually releasing a new feature to a small subset of users, developers can identify issues before they affect a larger audience and make the necessary modifications or fixes before deploying the feature to everyone, reducing the risk of shipping bugs.

Blue-Green Deployment

Blue-green deployment is a software deployment approach designed to enable seamless updates of an application. It involves running two identical environments: one for the existing version of the application (blue) and another for the upcoming version (green). This technique allows developers to test and validate the new version in a separate environment while the current version keeps serving end users, with no impact on the live system.

Once the new version is fully tested, traffic can be effortlessly switched from the old version to the new version. This method reduces downtime, mitigates the risk of introducing bugs or other issues, and guarantees end-users a secure and dependable deployment process. Furthermore, if any issues arise, traffic can quickly revert to the previous version, providing an extra layer of protection to the deployment process.

While it offers many benefits, there are also some potential drawbacks to consider with blue-green deployments. One of the main drawbacks is the additional infrastructure required to set up and maintain two identical environments, which can increase costs and complexity. Furthermore, any configuration or compatibility issues between the two environments can result in additional time and effort required to resolve them. Overall, while blue-green deployments are a powerful technique for seamless software updates, it's important to carefully consider your application's specific requirements and limitations before deciding if it's the right approach for you.


Argo Rollouts

Argo Rollouts is a Kubernetes-native tool that automates blue-green as well as canary deployments. Canary deployment is another prominent strategy in which new versions of an application are gradually released to a small group of users. With Argo Rollouts, developers can expose a new version to a small subset of users before releasing it to the larger audience, reducing the risk of an issue or bug affecting many users.

Argo Rollouts also offers several features, such as canary analysis, custom metrics, and automated rollbacks for problem resolution. Additionally, its user-friendly dashboard simplifies the oversight and management of deployment operations, thus giving developers improved control and transparency of their applications.

In this tutorial, you will deploy a Node.js application on Civo Kubernetes with GitHub Actions and Argo Rollouts. You will also use Argo CD to follow the GitOps approach for continuous delivery of applications on Kubernetes. To learn more about ArgoCD and GitOps concepts, you can refer to my previous tutorial on Deploying Knative Serverless with ArgoCD.

Prerequisites

To follow along with this tutorial, you will need a few things first:

After completing all the prerequisites, you are ready to proceed to the next section.

Cloning the Node.js application

In this tutorial, our main focus is deploying the application to Kubernetes. Therefore, you can directly clone the Node.js application to your own GitHub account and continue with the rest of the process.

To clone the project, run the following:

git clone https://github.com/Lucifergene/civo-argo-rollouts-tutorial.git

There are 2 branches in this repository:

  • main branch: This branch contains only the Node.js application code
  • deployment branch: This branch contains the application code along with all the YAML files that we will create in this tutorial.

If you are following this tutorial, check out the main branch.

You can run the application locally by first installing the dependencies. In the project’s root, type:

npm install

Then run the application with the command:

node app.js

The application should now be running at the address http://localhost:1337.

Containerizing the Node.js application

To deploy the application on Kubernetes, you must first containerize it. In this tutorial, we will be using Docker to build the container image.

Create a new file in the project's root directory and name it Dockerfile.

Copy the following content in the file:

# Set the base image to use for subsequent instructions
FROM node:alpine
# Set the working directory for any subsequent ADD, COPY, CMD, ENTRYPOINT,
# or RUN instructions that follow it in the Dockerfile
WORKDIR /usr/src/app
# Copy files or folders from source to the dest path in the image's filesystem.
COPY package.json /usr/src/app/
COPY . /usr/src/app/
# Execute any commands on top of the current image as a new layer and commit the results.
RUN npm install --production
# Define the network ports that this container will listen to at runtime.
EXPOSE 1337
# Configure the container to be run as an executable.
ENTRYPOINT ["npm", "start"]

To build and tag the container locally, you can type:

docker build -t civo-argo-rollouts-tutorial:latest .

Confirm that the image was successfully created by running this command from your terminal:

docker images

Then run the container with the command:

docker run -it -p 1337:1337 civo-argo-rollouts-tutorial:latest

The application should now be up and running and accessible with your web browser at the address http://127.0.0.1:1337.

Commit and push the changes to your fork of the GitHub repository.

Configuring Kubernetes manifests

Create a directory named manifests in the project's root directory.

Then, create the following files within the newly created directory:

  • namespace.yaml
  • rollout.yaml
  • service-active.yaml
  • service-preview.yaml
  • kustomization.yaml

In Kubernetes, namespaces serve as a tool for segregating clusters of resources within a solitary cluster.

Contents of the namespace.yaml are as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: civo-tutorial
  labels:
    name: civo-tutorial

Once applied to a cluster, this file will create a namespace named civo-tutorial inside the Kubernetes cluster. All the following resources will be created in this namespace.

Kubernetes Deployments natively support the Rolling Update and Recreate strategies; Rolling Update gradually replaces the pods of an application, minimizing downtime and risk. But since you will perform a blue-green deployment of your application with Argo Rollouts, you don't need to create a Kubernetes Deployment manually. Instead, you will create a Rollout custom resource, which Argo Rollouts uses to manage the entire blue-green deployment process.

Contents of the rollout.yaml are as follows:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: sample-app
  namespace: civo-tutorial
  labels:
    app: sample-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: civo-argo-rollouts-tutorial
          image: civo-argo-rollouts-tutorial
          ports:
            - name: http
              containerPort: 1337
  strategy:
    blueGreen:
      activeService: svc-active
      previewService: svc-preview
      autoPromotionEnabled: false

This file describes the application deployment using the blueGreen rollout strategy. The activeService and previewService fields specify the Kubernetes Services that will be used to expose the application.

The autoPromotionEnabled option is set to false so that the new version is not automatically promoted to active. Instead, you will use the Argo Rollouts dashboard or CLI to promote the preview version to active manually once you are ready.
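If you would rather not promote manually, the blueGreen strategy also supports timed auto-promotion. A sketch of that variant (the 30-second delay is an illustrative value):

```yaml
strategy:
  blueGreen:
    activeService: svc-active
    previewService: svc-preview
    autoPromotionEnabled: true
    # Promote the new ReplicaSet to active automatically,
    # 30 seconds after the preview becomes ready
    autoPromotionSeconds: 30
```

For this tutorial, however, keep autoPromotionEnabled set to false so you can observe and control each promotion step yourself.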

In this tutorial, since you are performing a blue-green deployment, you need to create two Kubernetes Services: one for the active version, which serves production traffic, and one for the preview version, which is used for testing before promotion. You also need to set the Service type to LoadBalancer so that the application can be accessed from outside the cluster.

Contents of the service-active.yaml are as follows:

apiVersion: v1
kind: Service
metadata:
  name: svc-active
  namespace: civo-tutorial
  labels:
    app: sample-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 1337
  selector:
    app: sample-app

Contents of the service-preview.yaml are as follows:

apiVersion: v1
kind: Service
metadata:
  name: svc-preview
  namespace: civo-tutorial
  labels:
    app: sample-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 1337
  selector:
    app: sample-app

Notice that the metadata.name fields of the two Services match the activeService and previewService names in rollout.yaml.
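You do not have to distinguish the two Services yourself: at runtime, the Argo Rollouts controller injects a rollouts-pod-template-hash entry into each Service's selector to pin it to a specific ReplicaSet. A sketch of what the controller writes back into a Service (the hash value is illustrative):

```yaml
# Managed by the Argo Rollouts controller; you do not add this by hand
spec:
  selector:
    app: sample-app
    rollouts-pod-template-hash: 6b4c7d9f5d  # illustrative hash of the pod template
```

During a rollout, the active and preview Services carry different hashes; promotion updates the active Service's hash to the new ReplicaSet's.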

To deploy the latest application version on the Kubernetes cluster, the manifests must be kept up to date with the most recent image information. This is managed by Kustomize, a tool for customizing Kubernetes configurations during deployment.

Contents of the kustomization.yaml are as follows:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rollout.yaml
  - service-active.yaml
  - service-preview.yaml
  - namespace.yaml
namespace: civo-tutorial
images:
  - name: civo-argo-rollouts-tutorial
    newName: avik6028/civo-argo-rollouts-tutorial
    newTag: v1

During the Continuous Integration process via GitHub Actions, the newName and newTag fields will be updated automatically with the most recent Docker image information, so their initial values are only placeholders.
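For example, if the CI pipeline runs `kustomize edit set image` with the commit SHA as the tag, the images stanza would be rewritten along these lines (the user name and SHA below are illustrative):

```yaml
images:
  - name: civo-argo-rollouts-tutorial
    newName: your-docker-user/civo-argo-rollouts-tutorial  # <DOCKER_USER>/<APP_NAME>
    newTag: 9f8c2a7d1b3e4f5a6c7d8e9f0a1b2c3d4e5f6a7b      # the commit SHA (github.sha)
```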

Commit and push these files into the main branch of the GitHub repository you had cloned earlier.

Launching the Civo Kubernetes cluster

In this tutorial, you will be deploying the application on Civo Kubernetes cluster. Therefore, you should have a Civo account and Civo CLI installed on your computer. The CLI should be connected to your Civo account.

You can refer to the Creating a Kubernetes cluster guide to create the cluster.

To be able to follow this tutorial, ensure the following specifications are met, while creating the cluster:

  • Current Region: NYC1
  • Number of nodes: 2
  • Node size: g4s.kube.medium
  • Cluster Type: K3S
  • Applications to be installed from Civo Marketplace:
    • ArgoCD
    • Argo Rollouts
    • Metrics-server
    • Traefik-v2-nodeport

You can view the list of applications that can be installed automatically, from the Civo Marketplace.

Once created, the Civo Kubernetes cluster will take a few minutes to launch.

Civo cluster launch

Configuring Kubernetes manifests for ArgoCD

To configure ArgoCD to deploy your application on Kubernetes, you will have to set up ArgoCD to connect the Git Repository and Kubernetes in a declarative way using YAML for configuration.

Create a directory named argocd in the project's root directory on your computer. Create a new file in the new directory and name it as config.yaml.

You need to paste the following in the config.yaml.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blue-green-deployment
  namespace: argocd
spec:
  destination:
    namespace: civo-tutorial
    server: 'https://kubernetes.default.svc'
  source:
    path: manifests
    repoURL: 'https://github.com/Lucifergene/civo-argo-rollouts-tutorial'
    targetRevision: deployment
  project: default
  syncPolicy:
    automated:
      prune: false
      selfHeal: false

You need to update the repoURL to the URL of your cloned GitHub repository. This will allow ArgoCD to monitor any kind of changes to your repository continuously.

Note: ArgoCD allows users to sync via manual or automatic policy to deploy applications to a Kubernetes cluster. In this tutorial, we will be using the automatic policy.

Commit and push these files into the main branch of the GitHub repository you had cloned earlier.

Creating the continuous integration pipeline

In this tutorial, you will be using GitHub Actions to create the continuous integration pipeline for the initial stage of the Progressive Delivery process.

GitHub Actions workflows live in the .github/workflows directory at the project’s root. Create the workflow as main.yml, i.e., at the path .github/workflows/main.yml.

The contents of main.yml are as follows:

name: Progressive delivery on Kubernetes with Argo Rollouts on Civo Kubernetes

on:
  push:
    branches: [deployment]

jobs:
  build-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push Docker image
        uses: docker/build-push-action@v1.1.0
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: ${{ format('{0}/{1}', secrets.DOCKER_USER, secrets.APP_NAME )}}
          tags: ${{ github.sha }}, latest

  bump-docker-tag:
    name: Bump the Docker tag in the Kustomize manifest
    runs-on: ubuntu-latest
    needs: build-publish
    steps:
      - name: Check out code
        uses: actions/checkout@v3

      - name: Install Kustomize
        uses: imranismail/setup-kustomize@v1
        with:
          kustomize-version: "3.6.1"

      - name: Update Kubernetes resources
        run: |
          cd manifests
          kustomize edit set image ${{ secrets.APP_NAME }}=${{ secrets.DOCKER_USER }}/${{ secrets.APP_NAME }}:${{ github.sha }}

      - name: Commit to GitHub
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"          
          git commit -am "Bump docker tag"

      - name: Push changes
        uses: ad-m/github-push-action@v0.6.0
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}

  argocd-configure:
    name: Configure ArgoCD
    runs-on: ubuntu-latest
    needs: bump-docker-tag
    steps:
      - name: Check out code
        uses: actions/checkout@v3

      - name: Install Civo CLI
        env:
          URL: https://github.com/civo/cli/releases/download/v1.0.32/civo-1.0.32-linux-amd64.tar.gz
        run: |
          [ -w /usr/local/bin ] && SUDO="" || SUDO=sudo
          $SUDO wget $URL
          $SUDO tar -xvf civo-1.0.32-linux-amd64.tar.gz
          $SUDO mv ./civo /usr/local/bin/
          $SUDO chmod +x /usr/local/bin/civo

      - name: Authenticate to Civo API
        run: civo apikey add Login_Key ${{ secrets.CIVO_TOKEN }}

      - name: Save Civo kubeconfig
        run: |
          civo region set ${{ secrets.CIVO_REGION }}
          civo kubernetes config ${{ secrets.CLUSTER_NAME }} --save

      - name: Install Kubectl
        uses: azure/setup-kubectl@v3
        id: install

      - name: Apply ArgoCD manifests on Civo
        run: |
          kubectl apply -f argocd/config.yaml

The CI workflow consists of 3 jobs:

  • build-publish : Builds and pushes the container image to Docker Hub
  • bump-docker-tag : Updates the Docker Image name and tag in the Kustomize manifest
  • argocd-configure : Applies the ArgoCD Configuration on the Kubernetes cluster

In this workflow, we have used some of the popular published actions from the GitHub Actions Marketplace.

GitHub provides encrypted secrets storage where all the action secrets can be stored safely. These secrets are referenced in the Actions workflow file. Switch to your repository's Settings tab to add secrets. Select the Actions option under Secrets in the left panel, then click the New repository secret button. On the next screen, enter the secret name and the value you want to assign to it.

Secrets file

The Secrets used in the file are listed below:

  • APP_NAME : Container Image Name (civo-argo-rollouts-tutorial)
  • CIVO_REGION : Default region for the Civo Kubernetes Cluster (NYC1)
  • CIVO_TOKEN : Your Civo API Key for authentication
  • CLUSTER_NAME : Civo Kubernetes Cluster Name (civo-argo-rollouts-tutorial)
  • DOCKER_USER : Your Docker Hub Username
  • DOCKER_PASSWORD : Your Docker Hub Password (API Token preferred)

After adding the secrets, commit and push the changes to your GitHub repository.

You will notice the Action workflow will start running. Once completed, you will see the following:

Secrets CI

Configuring ArgoCD and Accessing the Web Portal

ArgoCD will be pre-installed inside the Kubernetes cluster, as it was selected from the Civo Marketplace during cluster creation.

By default, the ArgoCD API server is not exposed outside the Kubernetes cluster. Therefore, you need to port-forward the ArgoCD API server to access the ArgoCD Web Portal.

To port-forward, use the following command on your computer:

kubectl port-forward svc/argocd-server -n argocd 8080:443

The API server can then be accessed at https://localhost:8080.

Note: You can also permanently allocate an external IP to the ArgoCD server. But this method is not advisable for production environments.

Once you have port-forwarded the ArgoCD API server, you must log in to the ArgoCD Web Portal.

To log in, you would need the username and password.

  • The username is set as admin by default.
  • To fetch the password, you need to execute the following command:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
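The jsonpath expression extracts the password field from the Secret, and base64 -d decodes it, since Kubernetes stores Secret data base64-encoded. You can see the decoding step in isolation (the encoded string below is just an example, not a real credential):

```shell
# Kubernetes Secret data is base64-encoded; -d reverses the encoding
echo 'cGFzc3dvcmQxMjM=' | base64 -d; echo
# prints: password123
```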

You need to use this username-password combination to log in to the ArgoCD portal.

ArgoCD portal

Monitoring the application on ArgoCD Dashboard

Once logged in to the ArgoCD Web Portal, you will land on the Application Tiles page.

Click on the application name to see the Tree View of all the resources currently running on the Kubernetes Cluster, along with their real-time status.

To get a better logical understanding of the traffic flow, you can switch to the Network View.

ArgoCD Network View

From the above image, you can see that both services point to the same set of pods, i.e., a single ReplicaSet. Therefore, you will see the same version of the application when you access either service's external IP.

Performing Progressive Delivery using Argo Rollouts

Argo Rollouts will also be pre-installed inside the Kubernetes cluster, since it was selected from the Civo Marketplace during cluster creation.

To view the Argo Rollouts Dashboard, you need to port-forward the Argo Rollouts server.

To port-forward the Argo Rollouts Dashboard, run the following on your machine:

kubectl argo rollouts dashboard

The Argo Rollouts dashboard can now be accessed at http://localhost:3100/rollouts.

Here, you will see the list of all the rollouts in the selected namespace that are currently running on the Kubernetes cluster. Click on the sample-app to view the rollout details.

Argo Rollouts 2

Currently, you will find a single revision of the rollout since you have not yet performed any updates to the application.

To see Argo Rollouts in action, you need to make some changes to the application code. Go to your remote Git repository and make a small change to the index.html file. As a suggestion, you can append version-2 to the title header and commit the changes.

Promoting the new version of the application

Once you have committed the changes, you will notice that the CI workflow has started running. Once completed, you will see two revisions listed under the Revisions section. Revision 1 is marked as the active revision and Revision 2 is marked as the preview revision.

Argo Rollouts 3

If you attempt to access the application using the External IP address of the svc-active service, you will see that the application is still running the previous version. This is because the svc-active service is still directing traffic to the old version of the application, which is Revision 1.

To view and test the new version of the application, you can use the External IP address of the svc-preview service. Once satisfied with the new version, you can promote it to the active revision.

To make Revision 2 the active revision from the Argo Rollouts Dashboard, click the Promote button on the Rollout details page. After clicking the Promote button, you will see that Revision 2 is now the active and stable revision.

Argo Rollouts 4

If you use the External IP address of the svc-active service to access the application, you will find it is running the new version now. This is because the svc-active service is currently directing traffic to the new application version, which is Revision 2. Additionally, you can still access the new version of the application using the External IP address of the svc-preview service.

Performing a rollback

Let's say you have made Revision 2 the active version of the application, but you have found that it's not working as expected. In that case, you can roll back to the previous version, which is Revision 1.

To do this from the Argo Rollouts Dashboard, locate the Rollback button corresponding to Revision 1 and click on it. After clicking the Rollback button, you will see that Revision 1 is removed, and a new Revision 3 is created. Specifically, Revision 3 is the same as the previous version (Revision 1).

However, Revision 2 remains marked as the active and stable revision. This is because the svc-active service is still directing traffic to the new application version, which is Revision 2. To access the previous version of the application, you need to use the External IP address of the svc-preview service.

Argo Rollouts 5

To revert to the previous version of the application, you must first make Revision 3 the active revision. To accomplish this, navigate to the Rollout details page in the Argo Rollouts Dashboard and click the Promote button. After clicking the Promote button, Revision 3 will become the active and stable revision.

Argo Rollouts 6

If you use the External IP address of the svc-active service to access the application, you will find it's running the previous version now. This is because the svc-active service is directing traffic to the previous version of the application, which is Revision 3. Additionally, you can still access the previous version of the application using the External IP address of the svc-preview service.

Note: You can also use the Argo Rollouts Kubectl Plugin for listing, promoting, and rolling back the rollout.

To get the details of the Rollout resource, you need to use the following command:

kubectl argo rollouts get rollout sample-app -n civo-tutorial

To promote the new version of the application, you need to use the following command:

kubectl argo rollouts promote sample-app -n civo-tutorial

To roll back to the previous version of the application, you need to use the following commands:

kubectl argo rollouts undo sample-app -n civo-tutorial
kubectl argo rollouts promote sample-app -n civo-tutorial

Accessing the Active and Preview Revisions of the Application

You can access the active and preview revisions of the application using the External IP addresses of the svc-active and svc-preview services respectively.

To get the External IP addresses of the svc-active and svc-preview services, you need to use the following command:

kubectl get svc -n civo-tutorial

Copy the IP addresses mentioned under the EXTERNAL-IP column of the svc-active and svc-preview services. Then, paste them into your browser's address bar to access the active and preview revisions of the application respectively.

Argo Rollout Application

In this tutorial, since you have rolled back to the previous version of the application, it is set as the active revision. As a result, both the svc-active and svc-preview services will be directing traffic to the previous version of the application.

Final step

Wrapping Up

In this tutorial, you learned how to perform a blue-green deployment of a Node.js application on a Civo Kubernetes cluster using Argo Rollouts. Progressive delivery is essential for ensuring the reliability and stability of applications running on Kubernetes. Argo CD and Argo Rollouts offer a robust and efficient way to manage and deploy applications on Kubernetes, and their integration with GitHub Actions makes it easy to incorporate them into your development process.

The complete source code for this tutorial can also be found here on GitHub.