Automate GitOps Pipeline for Node.js with Flux CD on Kubernetes
Learn how to automate your GitOps pipeline for a Node.js application using Flux CD on Kubernetes. Streamline deployments and easily update versions with this tutorial.
Written by
SRE Team Lead at Civo
In this tutorial I'll show you how to build a GitOps pipeline for a Node.js application built with Express.js. Rather than deploying new versions manually, Flux will deploy them to Kubernetes whenever a new build of the Docker image is available.

The components
- Kubernetes will be required for this tutorial, so you can either bring your own existing cluster, use Civo's managed k3s product, or k3sup
- A free GitHub account for Flux to monitor your config repo
- Flux CD was created by Weaveworks and is now hosted within the Cloud Native Computing Foundation, a neutral home for OSS. Flux can apply Kubernetes manifest YAML files to your cluster from a Git repository. Its true power comes in being able to bump the versions of images as they are produced by your CI system.
- Helm 3 is the successor to Helm 2 and tightens up security
- Flux Helm Operator - the Helm Operator is not required for use with Flux, but makes a good pairing: Flux applies a custom resource, and the Helm Operator then installs the selected version of the chart.
- Express.js is one of the most popular microservices frameworks for Node.js and makes it easy to define APIs, add authentication, integrate with middleware, and to serve static sites.
Why FluxCD, and what are the alternatives?
At the end of the tutorial, every new version of our app will be automatically updated in the cluster. What's more, if we delete our cluster by accident, we can recover quickly because all of our resources are defined in our Git repository. This means we can easily re-create them in a new cluster.

Flux is one of the best-known tools for CD within the CNCF landscape and has been the topic for many sessions, tutorials, and workshops at KubeCon.

There are other tools available for continuous deployment, including Argo from Intuit. Argo may be more suited to developers who prefer a graphical dashboard and visualisation of their cluster state. There is some good news, though: Argo and Flux will be merging some core components, so watch this space.
You may enjoy this video session from KubeCon: Panel: GitOps User Stories with Weaveworks and Intuit
Tutorial
If you have an intermediate to advanced level of experience with Kubernetes and Helm, then this tutorial may take you around 1-2 hours.
Creating a Kubernetes cluster
If you have a Civo account, create a new cluster in your Civo dashboard and configure your kubectl to point at the new cluster.
For a full walk-through of Civo k3s you can see Alex Ellis's blog post - The World's First Managed k3s
We can create a K3s cluster using the Civo CLI.
This will take a couple of minutes. Once finished, the --save flag will point your kubectl context to the new cluster. The command is:
$ civo kubernetes create --nodes 2 --save --switch --wait {cluster-name}
NOTE: Substitute a suitable name for your cluster in the {cluster-name} placeholder
Before going any further, check that you are pointing at the correct cluster:
kubectl config get-contexts
kubectl get node -o wide
Setting up Helm 3
If you're using macOS or Linux, simply run the below:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
If you're a Windows user, install Git Bash and run the above in a new terminal, or try Chocolatey:
choco install kubernetes-helm
Check the installation:
$ helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
Create a GitHub repository for Flux
Flux will store its state in a separate repository from your application code.
- CI or continuous integration builds new binaries or Docker images
- CD or continuous delivery deploys new versions of those previously built images
For this reason, Flux uses separate code and config repos.
Fork my repo under your own account:
https://github.com/alexellis/k8s-expressjs-flux
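The config repo's layout is simple; the part Flux watches is the releases folder. Here's a sketch of the structure (the manifest's file name is illustrative, yours may differ):

k8s-expressjs-flux/
└── releases/
    └── expressjs-k8s.yaml    # the HelmRelease manifest covered later in this tutorial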
Install Flux CD
fluxctl is the CLI to control and configure Flux on your cluster.
- Install fluxctl for macOS with Homebrew:
brew install fluxctl
- On Windows you can use Chocolatey:
choco install fluxctl
- Install the HelmRelease Kubernetes Custom Resource Definition (or CRD):

kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/flux-helm-release-crd.yaml
Custom Resource Definitions allow developers to create their own objects with custom schemas for Kubernetes. This CRD represents a Helm chart release, but other CRDs may represent functions, such as in OpenFaaS and its Operator.
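To make this concrete, here's an abridged sketch of roughly what the HelmRelease CRD you just applied declares; the real file in the fluxcd/helm-operator repository contains more fields, including a validation schema:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: helmreleases.helm.fluxcd.io
spec:
  group: helm.fluxcd.io           # the API group used by HelmRelease manifests
  names:
    kind: HelmRelease
    plural: helmreleases
    singular: helmrelease
  scope: Namespaced               # HelmRelease objects live inside a namespace
  versions:
    - name: v1
      served: true
      storage: true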
- Install Flux and the Helm Operator
Add FluxCD repository to Helm repos:
helm repo add fluxcd https://charts.fluxcd.io
- Create a namespace for flux
kubectl create namespace fluxcd
- Install Flux and point it at your fork of my repo:
export USER="alexellis"

helm upgrade -i flux fluxcd/flux --wait \
  --namespace fluxcd \
  --set git.url=git@github.com:$USER/k8s-expressjs-flux.git

NOTE: Substitute your own GitHub username for the USER variable, so that Flux points at your fork.
Flux uses an SSH key to read and/or write to your GitHub repository; this is called a Deploy Key. Get the public key:

kubectl -n fluxcd logs deployment/flux | grep identity.pub | cut -d '"' -f2

Then add it to your GitHub repository's deploy keys:
- Open GitHub,
- Navigate to your repository,
- Go to Settings > Deploy keys and click on Add deploy key,
- Check Allow write access,
- Paste the Flux public key and click Add key.

- Install the Helm Operator
The Helm Operator installs a release of a Helm chart to your cluster.
helm upgrade -i helm-operator fluxcd/helm-operator --wait \
  --namespace fluxcd \
  --set git.ssh.secretName=flux-git-deploy \
  --set helm.versions=v3
You'll see in the git.ssh.secretName field that the flux-git-deploy deploy key secret is used for the operator. We also specify that we want to use Helm 3 here.
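If you'd like to confirm that this secret was created by the Flux chart before continuing, you can list it:

kubectl -n fluxcd get secret flux-git-deploy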
View the sample app
Our test app serves a webpage which makes an API call back to retrieve some JSON. The JSON is rendered on the client-side in the browser and can be extended as required.
View the sample: alexellis/expressjs-k8s
View the config repo and HelmRelease
I used version 0.1.1 of the chart, but here's how you can find what version is available for your chart:
# First add the helm repo
helm repo add expressjs-k8s https://alexellis.github.io/expressjs-k8s/

# Then run an update
helm repo update

# Now search
helm search repo expressjs-k8s
We set up a HelmRelease object for Flux to apply to our cluster in the /releases/ folder:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: expressjs-k8s
spec:
  chart:
    repository: https://alexellis.github.io/expressjs-k8s/
    name: expressjs-k8s
    version: 0.1.1
  values:
    ingress:
      enabled: false
The first section, the metadata, says where the release will be applied; for instance, we can specify a Kubernetes namespace here, such as dev or prod.
The second part is the spec; here we can state the repository URL we'd normally use with a helm command, along with the target version.
The values section corresponds to values.yaml in a normal, manual Helm installation, and can be used to control versions of Docker images or other settings, like whether to create ingress records.
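For example, here's a sketch of a HelmRelease that targets a namespace and turns on ingress. The dev namespace and the enabled ingress are illustrative values, not part of this tutorial's config repo:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: expressjs-k8s
  namespace: dev               # illustrative: apply the release in "dev"
spec:
  chart:
    repository: https://alexellis.github.io/expressjs-k8s/
    name: expressjs-k8s
    version: 0.1.1
  values:
    ingress:
      enabled: true            # illustrative: create ingress records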
Verify that the application was applied:
$ fluxctl sync --k8s-fwd-ns fluxcd
Synchronizing with ssh://git@github.com/alexellis/k8s-expressjs-flux.git
Revision of master to apply is 23f65ef
Waiting for 23f65ef to be applied ...
Done.
After syncing, we'll now see the HelmRelease custom resources created:
kubectl get helmrelease -A

NAMESPACE   NAME            RELEASE                 STATUS     MESSAGE                       AGE
default     expressjs-k8s   default-expressjs-k8s   deployed   Helm release sync succeeded   6m
You can get even more detail with kubectl describe helmrelease/expressjs-k8s
We can also see the effect of the Helm Operator, which installed the Helm chart at the version we specified, 0.1.1:
$ kubectl get deploy -o wide
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS      IMAGES                     SELECTOR
default-expressjs-k8s   1/1     1            1           84s   expressjs-k8s   alexellis2/service:0.3.5   app.kubernetes.io/instance=default-expressjs-k8s,app.kubernetes.io/name=expressjs-k8s
Optionally, you can invoke the service:
$ kubectl port-forward deploy/default-expressjs-k8s 8080:8080 &

# Then:
$ curl -s localhost:8080/links | jq
[
  {
    "name": "github",
    "url": "https://github.com/alexellis"
  },
  {
    "name": "twitter",
    "url": "https://twitter.com/alexellisuk"
  },
  {
    "name": "blog",
    "url": "https://blog.alexellis.io"
  },
  {
    "name": "sponsors",
    "url": "https://github.com/users/alexellis/sponsorship"
  }
]

# Finally:
kill %1
Or view the main website in a browser:

Notice that the copyright is set to 2019; that won't do, since we're now in 2020 at the time of writing.
In the following steps we'll update the code and then publish a new Docker image. The way we're currently using Flux would require us to update our chart, republish it, and then update Flux's config repo. You'll see how to make this all automatic through the use of Semantic Versioning.
Automate deployments for new versions
Now Flux can apply our HelmRelease definition automatically, and the Helm Operator will then install the chart, but there's more we can do.
Flux can now automate updates to new versions according to a set of semver versioning policies, such as "always update to a new patch release".
From semver.org:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backwards compatible manner, and PATCH version when you make backwards compatible bug fixes. Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
So we should be able to write a policy that picks up all new PATCH versions without us having to manually touch our cluster. Flux has write access through the deploy key, which is how it can make a permanent change from, for example, 0.1.1 to 0.1.2.
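To illustrate, the filter annotation we'll add accepts several matching styles; semver:~0.3 is what this tutorial uses, and the commented lines are illustrative alternatives:

metadata:
  annotations:
    # Follow patch releases of 0.3: 0.3.1, 0.3.2, and so on
    filter.fluxcd.io/chart-image: semver:~0.3
    # Alternative: any version from 0.3.0 up to, but not including, 1.0.0
    # filter.fluxcd.io/chart-image: semver:>=0.3.0 <1.0.0
    # Alternative: simple glob matching on the image tag
    # filter.fluxcd.io/chart-image: glob:0.3.*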
When we're done, we'll also see commit messages from Flux in our config repo. You can probably see why having a shared repo for code and config wouldn't work for Flux: it would end up in a loop.

Source: Image by Stefan Prodan, Weaveworks
This diagram shows how Flux can scan the Docker images that you've pushed and then apply new versions through the values.yaml or spec override for a Helm chart.
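You can list the image tags that Flux has scanned for your workloads with fluxctl:

fluxctl list-images --k8s-fwd-ns fluxcd

Here's the updated HelmRelease with automation enabled: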
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: expressjs-k8s
  annotations:
    fluxcd.io/automated: "true"
    filter.fluxcd.io/chart-image: semver:~0.3
spec:
  chart:
    repository: https://alexellis.github.io/expressjs-k8s/
    name: expressjs-k8s
    version: 0.1.1
  values:
    ingress:
      enabled: false
    image: alexellis2/service:0.3.5
What did we change to automate release bumping?
- fluxcd.io/automated: "true" - this was added as an annotation to enable automation
- filter.fluxcd.io/chart-image: semver:~0.3 - this was added to update any images that have a patch release for 0.3; if we want to move to 0.4, we'd have to update the string to match that
- image: alexellis2/service:0.3.5 - we added this to the values, which represents values.yaml in a normal, manual Helm installation
Now we can push a new version of the expressjs-k8s Docker image, i.e. moving from alexellis2/service:0.3.5 to alexellis2/service:0.3.6. This matches our semver notation of ~0.3.
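The image is normally published by your CI system, but a manual sketch of the equivalent, assuming you have the app's source checked out and push access to the Docker Hub repository, would be:

# A sketch only: CI would normally build and push this image
docker build -t alexellis2/service:0.3.6 .
docker push alexellis2/service:0.3.6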
See Flux in action
Here's our list of Git commits; we can see that Flux made a successful patch to the config repo:

This is the code diff:
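In essence, the automated commit bumps a single line in the release manifest (the file path shown is illustrative):

--- a/releases/expressjs-k8s.yaml
+++ b/releases/expressjs-k8s.yaml
   values:
     ingress:
       enabled: false
-    image: alexellis2/service:0.3.5
+    image: alexellis2/service:0.3.6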

And we can also see the updated version and HelmRelease in the cluster:
$ kubectl get helmrelease/expressjs-k8s -o yaml

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  annotations:
    filter.fluxcd.io/chart-image: semver:~0.3
    fluxcd.io/automated: "true"
    fluxcd.io/sync-checksum: 6030a2af3ca9b68aef5475c5819b5e35a6bf019a
spec:
  chart:
    name: expressjs-k8s
    repository: https://alexellis.github.io/expressjs-k8s/
    version: 0.1.1
  values:
    image: alexellis2/service:0.3.6
    ingress:
      enabled: false
And the version has been applied and is serving the 2020 copyright string:
$ kubectl get deploy -o wide
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS      IMAGES
default-expressjs-k8s   1/1     1            1           31m   expressjs-k8s   alexellis2/service:0.3.6
Run the port-forward command from earlier and then open a browser:

Source: Image by author
From here, it's over to you to build your own applications and deploy them to your Kubernetes cluster using the power of Flux and Continuous Delivery.
Troubleshooting
You can troubleshoot Flux by looking at its logs like this:
kubectl logs -n fluxcd deploy/flux
And you can get the logs of the Helm Operator like this:
kubectl logs -n fluxcd deploy/helm-operator
The Helm Operator applies its own CRD called HelmRelease; you can find these resources with:
kubectl get HelmRelease --all-namespaces
For any named release, you can then describe it for more details and events:
kubectl describe helmrelease/expressjs-k8s -n default
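If a change in the config repo hasn't shown up yet, you can also force an immediate sync rather than waiting for Flux's next poll of the repository:

fluxctl sync --k8s-fwd-ns fluxcd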
Wrapping up
We now have an example of how to continuously deploy our Express.js application to our Kubernetes cluster of choice.
Note that whilst the Helm Operator is currently included in the Flux GitHub repository, it will be extracted to a separate component later. Flux itself can be used to apply any kind of Kubernetes object or CRD.
If you want to use secrets with your application, you can encrypt them using SealedSecrets, a project from Bitnami Labs. You can even deploy OpenFaaS and a set of OpenFaaS functions using the HelmRelease Operator, see this great tutorial by one of the Flux maintainers (Stefan Prodan) for more: Applying GitOps to OpenFaaS with Flux Helm Operator.
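As a hedged sketch of that SealedSecrets workflow, assuming the controller and the kubeseal CLI are installed, and using illustrative names throughout:

# Create a regular Secret manifest locally, without applying it
# (on older kubectl versions, use --dry-run instead of --dry-run=client)
kubectl create secret generic app-secret \
  --from-literal=apiKey=s3cr3t \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it; only the controller inside the cluster can decrypt the result
kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# sealed-secret.yaml is now safe to commit to your Flux config repo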
See also:
- Flagger - Progressive Delivery Operator for Kubernetes using Flux
- Applying GitOps to OpenFaaS with Flux Helm Operator
- Deep Dive: Flux the GitOps Operator for Kubernetes - Stefan Prodan, Weaveworks
To find out more about Flux and to connect with its community, see the project homepage.

SRE Team Lead at Civo
As a seasoned IT professional with over 15 years of experience, Ian has honed his skills in cloud engineering, DevOps, and site reliability, holding various roles across multiple companies. At Civo, Ian has been an integral part of the team as a Site Reliability Engineer and SRE Team Lead since 2019, bringing his expertise to the table. With a strong background in managing complex infrastructure, Ian is well-equipped to drive reliability and innovation.