Containers and microservices have completely transformed the way applications are deployed on the cloud. As a platform for container orchestration, Kubernetes has gained widespread acceptance since its introduction in 2014. It offers a collection of primitives for running robust, distributed applications. It handles application scaling and automatic failover, and offers deployment patterns and APIs that let you streamline resource management and add new workloads.

One of the key difficulties developers face is how to focus on the code itself rather than on the infrastructure that code runs on. Serverless is one of the most effective architectural paradigms for tackling this problem.

This tutorial will discuss how to deploy serverless workloads on Kubernetes using Knative and ArgoCD. Throughout, I will refer to information provided in my previous article, "Deploying web applications on Kubernetes with continuous integration", which can be found here.

What is Serverless?

Serverless computing is an approach to building and running applications and services without having to think about the underlying server infrastructure. With serverless, as opposed to a typical PaaS (Platform as a Service), the team can concentrate on the functionality of the service without having to worry about infrastructure issues like scaling and fault tolerance.

Serverless 1.0 vs Serverless 2.0

In Serverless 1.0, cloud vendors provided vendor-specific services, such as AWS Lambda and Azure Functions, which let users run serverless applications deployed either as single functions or inside containers. This raised concerns, because each vendor built an independent product with little consideration for portability or migration.

In Serverless 2.0, the focus is on portability and interoperability. This is possible with the help of open-source platforms, like Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure from developers, allowing them to deploy and manage applications using serverless architecture and patterns.

Understanding GitOps

GitOps is a powerful and innovative framework for implementing and managing modern cloud-native applications. It is a set of practices focusing on a developer-centric experience, using Git as the single source of truth to manage the underlying infrastructure of an application.

To accomplish this, an orchestration system like Kubernetes is crucial. Without one, infrastructure tends to be managed through numerous incompatible technologies, which makes it all but impossible to implement infrastructure-as-code (IaC) practices effectively. The expansion of Kubernetes is thus what paved the way for GitOps tooling.

A pull request is an essential component of the GitOps procedure. Pull requests introduce new configuration versions, which are merged into the main branch of the Git repository and then deployed automatically. The Git repository fully documents all modifications, including details about the environment at each step of the process.
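As an illustrative (hypothetical) example, a configuration change under GitOps might look like this from a developer's terminal:

# Propose a configuration change on a branch (branch and file names are illustrative)
git checkout -b bump-image-tag
# ...edit a manifest, e.g. change the image tag in knative/service.yaml...
git commit -am "Bump image tag"
git push origin bump-image-tag
# Open a pull request; once it is reviewed and merged into main,
# the CD tool (ArgoCD, below) reconciles the cluster to match the repository.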

What is ArgoCD?

ArgoCD is a Kubernetes-native continuous deployment (CD) tool. It deploys code changes directly to Kubernetes resources by pulling from Git repositories, as opposed to external CD solutions, which only support push-based deployments. It gives developers the ability to control application updates and infrastructure setup from a unified platform. It handles the latter stages of the GitOps process, ensuring that new configurations are correctly deployed to a Kubernetes cluster.

In this tutorial, you will learn how to deploy a Node.js application as a serverless workload with Knative on Civo Kubernetes, using GitHub Actions and ArgoCD.

Prerequisites

To follow along with this tutorial, you will need a few things first:

Once you have all the prerequisites in place, you are ready to proceed to the next section.

Cloning the Node.js application

By following this tutorial, you will deploy your first serverless workload on Kubernetes: a pre-prepared Node.js application. You will need to clone it into your own GitHub account to follow along.

Clone the Application Repository with the following command:

git clone https://github.com/Lucifergene/knative-deployment-civo.git

There are two branches in this repository:

  • main branch: contains only the Node.js application code.

  • deployment branch: contains the application code along with all the YAML files we will create in this tutorial.

If you are following along with this tutorial, check out the main branch.
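From a terminal, that is:

cd knative-deployment-civo
git checkout main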

The Node.js application lives in the app.js file:

const express = require("express");
const path = require("path");
const morgan = require("morgan");
const bodyParser = require("body-parser");

/* eslint-disable no-console */
const port = process.env.PORT || 1337;

const app = express();
app.use(morgan("dev"));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: "true" }));
app.use(bodyParser.json({ type: "application/vnd.api+json" }));
app.use(express.static(path.join(__dirname, "./")));

app.get("*", (req, res) => {
  res.sendFile(path.join(__dirname, "./index.html"));
});

app.listen(port, (err) => {
  if (err) {
    console.log(err);
  } else {
    console.log(`App at: http://localhost:${port}`);
  }
});

module.exports = app;

The key takeaway from this code is the port number on which the application will be running, which is 1337.

You can run the application locally by first installing the dependencies. In the project’s root, type:

npm install

Then run the application with the command:

node app.js

The application should now be up and running at the address http://localhost:1337.
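You can verify it responds by requesting the root path from another terminal; it should return the contents of index.html:

curl http://localhost:1337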

Containerizing the Node.js application

To deploy serverless workloads on Kubernetes, you first have to containerize your workload. Docker is one of the most popular containerization tools. To create Docker-based containers, you write a special file known as a Dockerfile, which consists of the commands required to assemble an image.

In the root directory of the project, create a new file named Dockerfile.

Copy the following content into the file:

# Set the base image to use for subsequent instructions
FROM node:alpine

# Set the working directory for any subsequent ADD, COPY, CMD, ENTRYPOINT,
# or RUN instructions that follow it in the Dockerfile
WORKDIR /usr/src/app

# Copy files or folders from source to the dest path in the image's filesystem
COPY package.json /usr/src/app/
COPY . /usr/src/app/

# Execute any commands on top of the current image as a new layer and commit the results
RUN npm install --production

# Define the network ports that this container will listen to at runtime
EXPOSE 1337

# Configure the container to be run as an executable
ENTRYPOINT ["npm", "start"]

To build the container locally, you need to have Docker installed on your system.

Type the following to build and tag the container:

docker build -t knative-deployment-civo:latest .

You can confirm that the image was successfully created with this command:

docker images

Then, you can start the container with the following command:

docker run -it -p 1337:1337 knative-deployment-civo:latest

The Node.js application should now be up and running at http://127.0.0.1:1337.

Finally, you can commit and push the changes to your GitHub repository.
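If you are unsure of the exact Git commands, something like the following will work (assuming origin points to your fork):

git add Dockerfile
git commit -m "Containerize the application"
git push origin main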

Configuring Knative Service manifests

In Knative, Services are used to deploy an application. To create an application using Knative, you must create a YAML file that defines a Service. This YAML file specifies metadata about the application, points to the hosted image of the app, and allows the Service to be configured.

Create a directory named knative in the root directory of the project. Then, create a new file in the knative directory and name it service.yaml.

The contents of service.yaml are as follows:


apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  creationTimestamp: null
  name: knative-deployment-civo
spec:
  template:
    metadata:
      creationTimestamp: null
      name: knative-deployment-civo
    spec:
      containerConcurrency: 0
      containers:
        - image: docker.io/avik6028/knative-deployment-civo:latest
          name: user-container
          ports:
            - containerPort: 1337
              protocol: TCP
          readinessProbe:
            successThreshold: 1
            tcpSocket:
              port: 0
          resources: {}
      enableServiceLinks: false
      timeoutSeconds: 300
status: {}

The key takeaways from this code are spec.template.metadata.name and spec.template.spec.containers[0].image, which denote the name of the revision template and the container image that Knative will pull and deploy on the Kubernetes cluster, respectively. These values will be updated automatically with the latest container image information during the continuous integration process.
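As an aside, if you had the Knative CLI (kn) installed, a roughly equivalent Service could be created imperatively — the image reference below is a placeholder for your own:

kn service create knative-deployment-civo \
  --image docker.io/<your-dockerhub-user>/knative-deployment-civo:latest \
  --port 1337

In this tutorial, however, the Service stays declared in Git so that ArgoCD can own it.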

Commit and push these files to the main branch of the GitHub repository you cloned earlier.

Launching the Civo Kubernetes cluster

In this tutorial, you will deploy the serverless workload with Knative on a Civo Kubernetes cluster. To create the cluster, you need a Civo account and the Civo CLI installed on your computer, with the CLI connected to your account.

Once that is done, you can launch a Civo Kubernetes cluster with the help of the Civo CLI.

Set the default region with the following command:

civo region current NYC1

Launch a two-node cluster, with each node of size g4s.kube.medium, using the following command:

civo kubernetes create knative-cluster --nodes=2 --size=g4s.kube.medium --create-firewall --applications=argo-cd,metrics-server,Traefik-v2-nodeport --wait

Note: This command will install ArgoCD, Traefik, and Metrics Server inside the Kubernetes cluster. You can view the list of applications that can be installed automatically in the Civo Marketplace.

The Civo Kubernetes cluster will take around two minutes to launch.
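To verify the cluster is up before continuing, you can inspect it with the Civo CLI:

civo kubernetes show knative-cluster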


Installing Knative in the Kubernetes Cluster

Once the cluster is up and running, you will have to install Knative inside the cluster to use it for deploying your serverless workload.

Before installing anything, you will need the Civo CLI once again to point kubectl at the new cluster.

Configure kubectl to connect to Civo Kubernetes using the following command:

civo kubernetes config knative-cluster --save

To install the Knative core components and custom resources, you will need to execute the following commands, making sure to substitute the version tag you want from the Knative list of releases:

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-crds.yaml

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-core.yaml

Knative also requires a networking layer to expose its services externally. Therefore, you will need to install Kourier, a lightweight ingress for Knative Serving that is now part of the Knative family. Once again, confirm the version you wish to install and change the command accordingly:

kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.7.0/kourier.yaml

Finally, you will have to configure Knative Serving to use Kourier by default by running the command:

kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'

Configuring DNS removes the need to pass a host header with every curl request. Knative offers a Kubernetes Job called default-domain that sets sslip.io as the default DNS suffix for Knative Serving. Apply it with:

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-default-domain.yaml

Once you execute these commands, Knative will be installed in the knative-serving namespace. To get details of all the resources in the namespace, run:

kubectl get all --namespace knative-serving
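All pods in the namespace should shortly reach the Running state. As a convenience (not required by the tutorial), you can block until the core deployments report ready:

kubectl wait deployment --all --for=condition=Available \
  --namespace knative-serving --timeout=300s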

Configuring ArgoCD in the Kubernetes Cluster

As discussed in the previous section, ArgoCD was pre-installed in the Kubernetes cluster as part of the cluster creation command, so you do not need to install it manually.

The ArgoCD API server does not expose an external IP by default, so you will need to expose it manually in order to access it from a browser.

There are two methods through which you can expose the ArgoCD API server:

Service Type Load Balancer

In this approach, you will change the argocd-server service type to LoadBalancer:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

Port Forwarding

If you want to access the API server without exposing the service, you can use kubectl port forwarding:

kubectl port-forward svc/argocd-server -n argocd 8080:443

The API server can then be accessed at https://localhost:8080.

Note: For this tutorial, follow the first method and expose the ArgoCD server with an external IP via a LoadBalancer service, as we will be accessing the application from the internet. This will create a Kubernetes load balancer for the service in your Civo account.

Accessing the ArgoCD Web Portal

Once you have exposed the ArgoCD API server, you can access the portal at the external IP address that was generated.

Since ArgoCD was installed in the argocd namespace, use the following command to get all the resources in that namespace:

kubectl get all --namespace argocd

Copy the External-IP corresponding to service/argo-cd-argocd-server.
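If you prefer to grab it from the command line, this should print the external IP once the load balancer is provisioned (using the service name shown above):

kubectl get svc argo-cd-argocd-server -n argocd \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'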


You can access the application at http://<EXTERNAL-IP>.

In my case, that was http://212.2.245.80/

Argocd portal

Now, to log in to the portal, you will need the username and password.

  • The username is set as admin by default.

  • To fetch the password, you need to execute the following command to retrieve the generated secret:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

Use this username-password combination to log in to the ArgoCD portal.
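If you have the ArgoCD CLI installed locally, the same credentials also work there; the CI pipeline we build later logs in exactly this way (placeholders are yours to fill in):

argocd login <EXTERNAL-IP> --insecure --username admin --password <password>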

Configuring Kubernetes manifests for ArgoCD

To configure ArgoCD to deploy your application on Kubernetes, you have to set up ArgoCD to connect to the Git repository and the Kubernetes cluster in a declarative way, using YAML configuration.

One of the key capabilities of ArgoCD is syncing applications to a Kubernetes cluster via either a manual or an automated policy. The differences between the two are:

  • Manual sync policy: As the name suggests, with this policy you synchronize your application through the CI/CD pipeline. Whenever a code change is made, the CI/CD pipeline is triggered, which in turn calls the ArgoCD server APIs to start a sync based on the committed changes. You can communicate with the ArgoCD server APIs using the ArgoCD CLI, or using the SDKs available for various programming languages for programmatic access.

  • Automated sync policy: ArgoCD can automatically sync an application whenever it detects differences between the desired manifests in Git and the live state in the cluster. A benefit of automated sync is that the CI/CD pipeline no longer needs direct access to the ArgoCD API server to perform the deployment. Instead, the pipeline commits and pushes the manifest changes to the tracked Git repository.

Start by creating a directory named argocd in the root directory of the project. Then, create a new file in the new directory and name it config.yaml.

To set up the manual sync policy for ArgoCD, paste the following into config.yaml. Make sure to edit the repoURL line to match your personal project repository!


apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: knative-deployment-civo
  namespace: argocd
spec:
  destination:
    namespace: nodejs
    server: 'https://kubernetes.default.svc'
  source:
    path: knative
    repoURL: 'https://github.com/Lucifergene/knative-deployment-civo'
    targetRevision: main
  project: default
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

If you want the automated sync policy instead, paste the following into config.yaml:


apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: knative-deployment-civo
  namespace: argocd
spec:
  destination:
    namespace: nodejs
    server: 'https://kubernetes.default.svc'
  source:
    path: knative
    repoURL: 'https://github.com/Lucifergene/knative-deployment-civo'
    targetRevision: main
  project: default
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
    syncOptions:
      - CreateNamespace=true

After this, commit and push these files to the main branch of the GitHub repository you cloned earlier.

Creating the continuous integration pipeline

The objective of this tutorial is to show how to deploy the serverless workload with Knative on Kubernetes through continuous integration via GitHub Actions and continuous deployment via ArgoCD.

To create the CI pipeline, we will use GitHub Actions integrated with your GitHub account. GitHub Actions workflows live in the .github/workflows directory in the project's root folder as a main.yml file, i.e., the path to the configuration is .github/workflows/main.yml.

The contents of the main.yml we will need to create are as follows:


name: NodeJS Deployment on Civo K8s using Knative and ArgoCD

on:
  push:
    branches: [main]

jobs:
  build-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and push Docker image
        uses: docker/build-push-action@v1.1.0
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: ${{ format('{0}/{1}', secrets.DOCKER_USER, secrets.APP_NAME )}}
          tags: ${{ github.sha }}, latest

  bump-docker-tag:
    name: Bump the Docker tag in the Knative Service manifest
    runs-on: ubuntu-latest
    needs: build-publish
    steps:
      - name: Check out code
        uses: actions/checkout@v2
      - name: Install yq - portable yaml processor
        env:
          URL: https://github.com/mikefarah/yq/releases/download/3.3.4/yq_linux_amd64
        run: |
          [ -w /usr/local/bin ] && SUDO="" || SUDO=sudo
          $SUDO wget $URL
          $SUDO mv ./yq_linux_amd64 /usr/local/bin/yq
          $SUDO chmod +x /usr/local/bin/yq
      - name: Update Knative Service manifest
        run: |
          yq w -i knative/service.yaml spec.template.metadata.name "${{ secrets.APP_NAME }}-${{ github.run_id }}-${{ github.run_attempt }}"
          yq w -i knative/service.yaml spec.template.spec.containers[0].image "docker.io/${{ secrets.DOCKER_USER }}/${{ secrets.APP_NAME }}:${{ github.sha }}"
      - name: Commit to GitHub
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git commit -am "Bump docker tag"
      - name: Push changes
        uses: ad-m/github-push-action@v0.6.0
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}

  argocd-configure:
    name: Configure ArgoCD
    runs-on: ubuntu-latest
    needs: bump-docker-tag
    steps:
      - name: Check out code
        uses: actions/checkout@v2
      - name: Install Civo CLI
        env:
          URL: https://github.com/civo/cli/releases/download/v1.0.32/civo-1.0.32-linux-amd64.tar.gz
        run: |
          [ -w /usr/local/bin ] && SUDO="" || SUDO=sudo
          $SUDO wget $URL
          $SUDO tar -xvf civo-1.0.32-linux-amd64.tar.gz
          $SUDO mv ./civo /usr/local/bin/
          $SUDO chmod +x /usr/local/bin/civo
      - name: Authenticate to Civo API
        run: civo apikey add Login_Key ${{ secrets.CIVO_TOKEN }}
      - name: Save Civo kubeconfig
        run: |
          civo region set ${{ secrets.CIVO_REGION }}
          civo kubernetes config ${{ secrets.CLUSTER_NAME }} --save
      - name: Install Kubectl
        uses: azure/setup-kubectl@v3
        id: install
      - name: Apply ArgoCD manifests on Civo
        run: |
          kubectl apply -f argocd/config.yaml

  # Include the following job only when you opt for the ArgoCD manual sync policy:
  argocd-manual-sync:
    name: Sync the ArgoCD Application manually
    runs-on: ubuntu-latest
    needs: argocd-configure
    steps:
      - name: Check out code
        uses: actions/checkout@v2
      - name: Install ArgoCD CLI
        env:
          URL: https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
        run: |
          [ -w /usr/local/bin ] && SUDO="" || SUDO=sudo
          $SUDO curl --insecure -sSL -o /usr/local/bin/argocd $URL
          $SUDO chmod +x /usr/local/bin/argocd
      - name: ArgoCD CLI login
        run: argocd login ${{ secrets.ARGOCD_SERVER }} --insecure --username ${{ secrets.ARGOCD_USERNAME }} --password ${{ secrets.ARGOCD_PASSWORD }}
      - name: Manual sync
        run: argocd app sync ${{ secrets.APP_NAME }}
      - name: Wait for application to reach a synced and healthy state
        run: argocd app wait ${{ secrets.APP_NAME }}

The CI workflow defined above consists of four jobs:

  • build-publish: builds the container image and pushes it to Docker Hub

  • bump-docker-tag: updates the Knative Service YAML with the latest container image tag

  • argocd-configure: applies the ArgoCD configuration to the Kubernetes cluster

  • argocd-manual-sync: needed only when you opt for the manual sync policy; for automated sync, you can omit this job from the file

In this workflow, we have used some of the published actions from the GitHub Actions Marketplace.

Note: Since the above Actions workflow file has to be pushed to GitHub, you cannot put sensitive information in it. To store such values, GitHub provides encrypted repository secrets, where all the action secrets can be stored safely and then referenced from the workflow file.

To add secrets, switch to the Settings tab of your repository on GitHub. Select the Actions option under Secrets in the left panel, then select the New repository secret button. On the next screen, type the secret name and the value you want assigned to it.

add Secrets to your repository on GitHub

The Secrets used in the file are listed below:

  • APP_NAME: Container Image Name (knative-deployment-civo)

  • ARGOCD_PASSWORD: ArgoCD portal password

  • ARGOCD_SERVER: ArgoCD Server IP Address

  • ARGOCD_USERNAME: ArgoCD portal username (admin)

  • CIVO_REGION: Default region for the Civo Kubernetes Cluster

  • CIVO_TOKEN: Civo API Key for authentication

  • CLUSTER_NAME: Civo Kubernetes Cluster Name (knative-cluster)

  • DOCKER_USER: Docker Hub username

  • DOCKER_PASSWORD: Docker Hub password (an API token is preferred)

After adding the secrets, commit and push the changes to your GitHub repository. This time, you will notice that the Actions workflow starts running. Once it completes, you will see its status as Success.

GitHub repository success

Monitoring the application on ArgoCD Dashboard

If you saw a green tick for all the jobs in the previous step, the application has been successfully deployed to the Kubernetes cluster.

To observe and monitor the resources currently running on the Kubernetes cluster, log in to the ArgoCD web portal.

Once logged in, you will see the Applications page:

Monitoring the application on ArgoCD Dashboard

Now, click on the application name and you will be redirected to a page showing a tree view of all the resources currently running on the Kubernetes cluster, along with their real-time status.

tree view of all the resources that are currently running on the Kubernetes Cluster
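The same status is also available from the command line via the ArgoCD CLI, assuming you logged in with argocd login earlier:

argocd app get knative-deployment-civo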

Accessing the application running on Knative

To access the application, you will need the DNS name of the route created by the Knative Service.

Since we created all the resources in the nodejs namespace, use the following command to get all the resources in that namespace:

kubectl get all --namespace nodejs

Copy the URL corresponding to service.serving.knative.dev/knative-deployment-civo.


You can access the application with the URL. In my case, that was http://knative-deployment-civo.nodejs.212.2.242.109.sslip.io:

Basic application deployed
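As a tip, you can also read the URL straight from the Service's status (ksvc is kubectl's short name for Knative Services):

kubectl get ksvc knative-deployment-civo -n nodejs -o jsonpath='{.status.url}'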

Wrapping Up

We have now reached the end of the tutorial. By following this guide, you learnt how to build an automated CI pipeline that continuously deploys a serverless workload on a Kubernetes cluster following GitOps practices, with Knative and ArgoCD. Once the pipeline is properly configured, any change made to the application code will be reflected on the workload URL automatically, with no further need to configure and deploy applications on Kubernetes manually.

The complete source code for this tutorial can also be found here on GitHub.

About the author

Avik Kundu is a Software Engineer at Red Hat, a full-stack developer, open source contributor, and tech content creator proficient in DevOps and Cloud. He has written articles and tutorials on various tools and technologies and loves to share his knowledge in public. If you are interested, you can also follow him on Twitter.