Over time, Kubernetes has established itself as the leading container orchestration platform. It enables the management of a vast number of containerized microservices at scale, providing capabilities such as availability, self-healing, dynamic application scaling, service discovery, traffic routing, and security.

However, new features and capabilities in the technology landscape tend to bring new challenges with them, and Kubernetes is no exception to this rule.

One of the major challenges of running Kubernetes is managing application deployments. This process involves the creation of multiple objects such as Services, Deployments, Secrets, and ConfigMaps, and each requires its own specific configuration, which is usually done using declarative manifests written in YAML.

When deploying applications to Kubernetes in a traditional way, you’ll need to create these YAML files for each object. This involves adjusting the specification of each object in the file, and then creating them individually on the cluster to get your application running.

Now imagine doing this for hundreds, or thousands, of microservices, each with its specific objects, and also with different configurations for different environments (testing, staging, production). This, of course, requires a lot of manual work and becomes error-prone and harder to manage.

Helm to the rescue!

Helm simplifies application deployments on Kubernetes by packaging all required application objects and configurations and deploying them as a single entity. It also allows templating different configurations to match different environments for the deployment. This way, we can have a more consistent and efficient model to manage our deployments on Kubernetes.

Throughout this tutorial, we’re going to explore what Helm is and why we need it. We’ll discover the different features that Helm offers to allow for seamless Kubernetes deployments, and we’ll explain the different Helm components and how each one of them works.

What is Helm?

Helm is usually referred to as the Kubernetes package manager because its functionality resembles that of traditional package managers. So, before we get into what Helm is, let’s briefly cover what package managers do.

An introduction to package managers

Package managers simplify installing, upgrading, configuring, and removing software packages. These packages usually contain application software along with its metadata like version, description, and required dependencies.

Instead of manually installing software by downloading the necessary files, placing each one in its specific location, checking and installing each dependency with its correct version, and applying other required configurations, package managers seamlessly and consistently execute all these tasks on your behalf, so you don’t have to worry about each detail. You usually run a simple command, tell the package manager which package you want, and it automatically handles everything else for you.

Package managers also simplify software distribution by using what are called repositories. Repositories are central locations where packages are stored and can be shared. Package managers then connect to these repositories to fetch and upload packages.

An introduction to Helm

Now, going back to Helm, it similarly applies the concepts behind package managers, but specifically for Kubernetes applications. Helm simplifies the installation, upgrade, and removal of Kubernetes applications from a cluster. Instead of manually deploying each Kubernetes object (such as Service, Deployment, ConfigMap, Secret) to install a specific application, Helm wraps all these components into a single package called a chart. We can then use Helm commands to install these charts on Kubernetes.


Helm also manages dependencies, so when a Kubernetes application depends on another service or application, Helm can automatically download and install this dependency chart.

As with other package managers, charts are stored in specific Helm repositories, which simplifies the distribution and sharing of Kubernetes applications. Helm can connect to these repositories to download or upload charts.

Why Do We Need Helm?

Helm offers several key features that can help you overcome the challenges encountered when managing application deployments on Kubernetes. To better understand the “why” behind using Helm, it’s also important to recognize these Kubernetes challenges.

So, let’s explore the features of Helm and the corresponding challenges it aims to solve:

Package management

This is the part we’ve discussed previously about wrapping application components inside a single package called a chart. This enables automatic and consistent application installation on Kubernetes.

By using Helm charts, we avoid the manual process of deploying and managing each Kubernetes resource individually.


Templating

When managing multiple Kubernetes environments, such as testing, staging, and production, we often need to deploy our resources with different specifications tailored to each environment. For instance, we might deploy only one replica of a pod for testing, while deploying three replicas for staging and production.

Instead of maintaining multiple YAML manifests for each object to accommodate different environments, Helm offers a way to template the YAML manifests. This is achieved by creating a single YAML file and dynamically injecting values that correspond to each environment.


For example, we can create a template YAML file for a Deployment object and create separate values files for each environment. We can then instruct Helm which values to use in a specific deployment, depending on the environment we are targeting.
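As a sketch of this idea (the file names and values below are illustrative, not from a real chart), a templated Deployment excerpt and two per-environment values files might look like:

```yaml
# templates/deployment.yaml (excerpt) -- replicaCount is injected at render time
spec:
  replicas: {{ .Values.replicaCount }}

# values-testing.yaml
replicaCount: 1

# values-production.yaml
replicaCount: 3
```

Passing -f values-production.yaml to helm install would render the template with replicas: 3, while the testing file would render replicas: 1.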

Versioning and release management

Another challenge with Kubernetes deployments is keeping track of the application versions installed on the cluster. It’s difficult to maintain the history of each object deployed as part of an application and be able to easily upgrade or roll back to a specific version or state of the application.

Helm addresses this problem by introducing the concept of releases. A Helm release is simply a deployed instance of a Helm chart. In other words, when we install our application on Kubernetes using a Helm chart, we’re actually creating a release.

Helm keeps track of the versions of each release. For instance, after an application is installed on a cluster, Helm marks this release with a specific version. Later, we might need to upgrade this release after making some modifications to the chart.


Using a command, Helm applies the changes to the current release and marks it with a newer version. This process continues with each upgrade to our release, and Helm keeps a record of each version with its respective changes.

Now, we can roll back to a specific version of our release using another Helm command. Helm will apply the required changes to bring each resource within the release back to its previous state. This demonstrates how Helm consistently and seamlessly handles versioning and applies upgrade/rollback operations to our applications.
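As a sketch of this workflow (the release and chart names here are illustrative), the upgrade and rollback operations look like this:

```shell
# Upgrade the release after modifying the chart; Helm records this as a new revision
helm upgrade myrelease ./mychart

# List the recorded revisions of the release
helm history myrelease

# Roll the release back to revision 1
helm rollback myrelease 1
```

Each upgrade or rollback is itself recorded as a new revision, so the release history always reflects the full sequence of changes.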

How Does Helm Work?

Helm communicates with the Kubernetes cluster to create and manage resources. Each major version of Helm implements this communication in a different way.

Currently, there are two major Helm versions, v2 and v3. Let’s explore how each of them works and the architectural differences between them.

Comparing Helm v2 and Helm v3

Feature/Aspect | Helm v2 | Helm v3
Architecture | Client-server (Helm client and Tiller server) | Client-only
Major components | Helm client and Tiller server | Helm client (includes the Helm library)
Helm client role | Sends requests to Tiller; handles local chart development and repository management | Interacts directly with the kube-apiserver; handles chart rendering and repository management
Tiller server role | Interacts with the Kubernetes API; manages resources; tracks releases | Removed in Helm v3
Installation location | Tiller installed in the Kubernetes cluster (kube-system namespace) or locally | No in-cluster component; operates entirely client-side
Security model | Relies on Tiller for access control; complex with Kubernetes RBAC | Simplified; uses the user's kubeconfig for access control and permissions
Resource management | Tiller creates and manages resources via the Kubernetes API | Helm client directly manages resources via the Kubernetes API
Release tracking | Tiller tracks releases; stores state in ConfigMaps alongside Tiller | Client stores release state as Secrets in the release's namespace
Chart rendering | Handled by the Tiller server | Handled by the Helm client using the integrated Helm library
Operational complexity | Higher, due to Tiller management and security configuration | Simpler, more streamlined operation without Tiller

Helm v2

Helm v2 introduced a client-server architecture in the Kubernetes ecosystem, consisting of the Helm client and the Tiller server. This design was pivotal in Helm's early adoption, catering to the needs of Kubernetes environments before the introduction of Role-Based Access Control (RBAC).

The Helm client in v2 is a command-line interface that facilitates chart development and repository management. It communicates with the Tiller server, which is typically deployed within the Kubernetes cluster, to execute Helm commands.


One of the key operations in Helm v2 is the deployment of Helm charts. This process involves the Helm client sending a chart and its configuration to the Tiller server, which then renders the necessary Kubernetes manifests and interacts with the Kubernetes API to deploy the application.

The necessity of Tiller stemmed from the limitations in early Kubernetes versions, which lacked sophisticated access control mechanisms. Tiller played a crucial role in managing access and executing commands in shared cluster environments. However, with the advancement of Kubernetes, particularly the introduction of RBAC, managing Tiller's access control became complex, paving the way for the architectural changes seen in Helm v3.

Helm v3

Helm v3 marks a significant shift in Helm's architecture by eliminating the Tiller server and transitioning to a client-only model. This change simplifies operations and enhances security, leveraging the user's local kubeconfig file for cluster communication and permission management.

In this new model, the Helm client is responsible for more than just sending commands: it actively renders chart templates and configurations. This is achieved through an integrated Helm library, which processes the Helm charts and generates the necessary Kubernetes YAML manifests.


The operational workflow in Helm v3 remains similar to that of Helm v2, starting with the helm install command. However, the process now involves the Helm client directly handling the chart templates and configurations, rendering them, and sending the resulting Kubernetes manifests to the cluster without an intermediary.

A notable advancement in Helm v3 is how it handles release state and information. Unlike Helm v2, which stored this data in ConfigMaps in the same namespace as the Tiller server, Helm v3 stores release information as Kubernetes Secrets directly within the corresponding namespace of each release. This approach aligns with the overall design philosophy of Helm v3, focusing on simplicity, security, and direct interaction with the Kubernetes environment.
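We can observe this directly on a cluster: Helm v3 stores each release revision as a Secret of type helm.sh/release.v1 in the release's namespace. For example, assuming a release installed in the default namespace:

```shell
# List the release-state Secrets that Helm v3 keeps in the default namespace
kubectl get secrets --namespace default --field-selector type=helm.sh/release.v1
```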

Helm Charts Explained

Now that we have a basic understanding of how Helm works, let’s dive into another key concept of Helm: the chart.

As mentioned earlier, a Helm chart is a package for a Kubernetes application. It includes all the required application resources, such as Deployment, Service, ConfigMap, etc. Instead of deploying and configuring each of these resources individually, Helm charts enable us to manage the whole application as a single entity.

As we’ve seen previously, with the use of Helm charts, we can easily install an application on Kubernetes with a single command: helm install <application>. Helm then takes care of all the underlying details of creating each resource on the cluster with the proper configuration.

One of the major benefits of Helm charts is that they provide a consistent way of deploying applications. We can use the helm install command for the same chart multiple times on different clusters and expect the same resources and configuration to be created.

This is much like installing traditional software packages where we can install a specific package using a package manager multiple times on different machines, and expect the same results without any configuration skew.

So, what does a chart really look like?

A chart is a collection of files structured in a particular hierarchy; these files provide Helm with the information required to deploy a specific application. We can create a chart locally from scratch using the helm create command, or download one from a chart repository with the helm pull command.

Let’s have a quick look at the contents of a chart:

mychart/
├── Chart.yaml
├── charts/
├── templates/
└── values.yaml

First things first, the name of the directory that contains the chart files is the name of the chart itself, so here, our chart name is mychart.

Inside that chart directory, we can find the following structure:

  • Chart.yaml: This file contains metadata about the chart like name, version, maintainers, minimum required Helm or Kubernetes version, and other information that describes the application.
  • charts: This directory contains the charts that this chart depends on. Much as installing some packages requires other dependency packages, Helm places any dependency charts in this folder.
  • templates: This directory contains template manifests for every Kubernetes resource required by the application, like Service, Deployment, or ConfigMap. Helm uses the Go template language to render the resource templates under this directory and generate the corresponding Kubernetes YAML manifests to be passed to Kubernetes.
  • values.yaml: This file contains dynamic values to be injected into the template files to generate the Kubernetes manifests. Helm uses the values in this file as the defaults, but we can override them by passing another file as a parameter to the helm install command. This allows different environments to have different replica counts, for example.
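For illustration, a minimal Chart.yaml might look like the following (the name and version numbers here are hypothetical):

```yaml
apiVersion: v2                # chart API version (v2 for Helm 3)
name: mychart                 # chart name, matching the directory name
description: A Helm chart for a sample application
version: 0.1.0                # version of the chart itself
appVersion: "1.0.0"           # version of the application being packaged
```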

Whether you create your own chart from scratch or download a ready-to-use chart, Helm will expect to find this chart directory structure to be able to render and install the chart.

Luckily, if you build your own chart, the helm create command will generate a sample chart directory for you, so you don’t have to create each file individually.


How Do Helm Charts Work?

Let's quickly walk through a simple example to see Helm charts in action and understand how they work.

Getting the Helm Chart

The first step in working with a Helm chart is to obtain the chart itself. As we’ve discussed, we can build our own chart from scratch, or we can get a ready-made chart from one of the chart repositories.

For our scenario, we'll download a pre-made chart for simplicity:

#helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

#helm pull bitnami/wordpress


Here, we used the helm pull command to download a Helm chart for wordpress from the bitnami repository. We’ll discuss chart repositories in a later section, but for now, all we need to know is that we use repoName/chartName to get a chart from a specific repo.

The chart is downloaded as a tar archive. If we unpack it, we’ll see a directory structure similar to what we’ve discussed before:

#tar -xvf wordpress-19.1.2.tgz
#tree wordpress
├── Chart.lock
├── Chart.yaml
├── README.md
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── config-secret.yaml
│   ├── deployment.yaml
│   ├── externaldb-secrets.yaml
│   ├── extra-list.yaml
│   ├── hpa.yaml
│   ├── httpd-configmap.yaml
│   ├── ingress.yaml
│   ├── metrics-svc.yaml
│   ├── networkpolicy-backend-ingress.yaml
│   ├── networkpolicy-egress.yaml
│   ├── networkpolicy-ingress.yaml
│   ├── pdb.yaml
│   ├── postinit-configmap.yaml
│   ├── pvc.yaml
│   ├── secrets.yaml
│   ├── serviceaccount.yaml
│   ├── servicemonitor.yaml
│   ├── svc.yaml
│   └── tls-secrets.yaml
├── values.schema.json
└── values.yaml

It's important to note that a chart can also contain a /charts folder, which holds other charts that are dependencies of the main chart.


This dependency management is one of the chief benefits of Helm: it simplifies how we control our chart dependencies and automatically downloads them when needed.
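Dependencies are declared in the chart's Chart.yaml. As a hedged sketch (the chart name, version range, and condition below are illustrative), a dependencies section might look like:

```yaml
dependencies:
  - name: mariadb                                  # dependency chart name
    version: 15.x.x                                # acceptable version range
    repository: https://charts.bitnami.com/bitnami # where to fetch it from
    condition: mariadb.enabled                     # install only if this value is true
```

Running helm dependency update then downloads the listed charts into the /charts folder.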

Inspecting the Helm Chart files

Let’s check part of the resource files under the /templates directory and see what they look like:

#cat templates/deployment.yaml
apiVersion: {{ include "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
  name: {{ include "common.names.fullname" . }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
  {{- if .Values.commonAnnotations }}
  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
  {{- end }}
spec:
  {{- $podLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.podLabels .Values.commonLabels ) "context" . ) }}
  selector:
    matchLabels: {{- include "common.labels.matchLabels" ( dict "customLabels" $podLabels "context" $ ) | nindent 6 }}
  {{- if .Values.updateStrategy }}
  strategy: {{- toYaml .Values.updateStrategy | nindent 4 }}
  {{- end }}
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}

Here, we’re looking at the deployment.yaml template. It looks very similar to a normal Kubernetes YAML manifest, but with some values replaced by template expressions.

We won’t go into each detail of the file, but let’s take the replicas field as an example to understand how things work. The replicas field is used by a deployment to indicate the number of Pods we need to be running at any given time. This field usually has an integer number value, but here we can see it declared with this special format:

replicas: {{ .Values.replicaCount }}

So what does this mean?

Well, this is the templating feature of Helm that we talked about earlier.

Helm enables us to inject dynamic values into our templates to generate a resource manifest. These values are represented by the above format and are passed to the template through the values.yaml file.

When these values are referenced in our templates, the Helm engine replaces each reference with its corresponding value from the values.yaml file. After all values are substituted, a final YAML manifest is generated and passed to Kubernetes to create the resource.

Let’s now take a look at the values.yaml file:

#cat values.yaml
replicaCount: 1
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%

Here, the file is populated with key-value pairs arranged in a hierarchical structure. To reference a particular value in our template, we use the .Values keyword followed by the key name in the hierarchy.

For example, to reference the replicaCount, we use .Values.replicaCount; to reference the type key under updateStrategy, we use .Values.updateStrategy.type.

Therefore, when installing this chart, the replica specification in the deployment template should equal 1, as it will use the .Values.replicaCount from the values.yaml file.

Installing the Helm Chart

Now that we understand how things work under the hood, let’s install our chart and check what happens:

#helm install myrelease ./wordpress
NAME: myrelease
LAST DEPLOYED: Sun Jan 21 03:24:39 2024
NAMESPACE: default
STATUS: deployed
CHART NAME: wordpress

Let’s break down our command here. helm install passes the required chart files and values to the Helm engine for rendering. In the command, we specify a release name, myrelease, followed by the path to our chart directory.

A release is a running instance of a chart; this is how Helm tracks the charts installed on the cluster and their versions. We can use the same chart to install multiple releases of the same application, which is why a release name is used alongside the chart name.

To verify which releases are currently installed, we can use the command helm ls:

#helm ls
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                  APP VERSION
myrelease       default         1               2024-01-21 03:24:39.326454433 +0000 UTC deployed        wordpress-19.1.2       6.4.2

Now let’s check the deployment from a Kubernetes perspective using normal kubectl commands:

#kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
myrelease-wordpress   1/1     1            1           3m4s

Here, we can see our wordpress deployment is created successfully with 1 replica, which was specified by merging the values.yaml file with the deployment template file.

Overriding default Chart values

When working with multiple environments (testing, staging, production) we might need to deploy our chart resources with different configurations on each environment.

We can do this by changing the values in the values.yaml file each time we deploy to a different environment, but it won’t be easy to keep track of all the values for each environment and change them every time.

For example, when deploying to a testing environment, we can set the replicaCount in the values.yaml file to 1. Then we adjust it to 3 when deploying to staging or production. Of course, this will require multiple changes for each environment which will cause a lot of manual work and possibility for errors.

To overcome this, Helm enables us to override the default values.yaml file with another values file that we create. We can then pass this new file to the helm install command so that it takes precedence over the default values.yaml.

This way, we can create a values file for each environment, and then we can specify the file we want depending on the environment we’re deploying to.

Now, let’s test this in our chart:

#cat testing-values.yaml 
replicaCount: 3

Here, we created another values file called testing-values.yaml, with a single entry setting replicaCount to 3. Let’s uninstall the previous release with helm uninstall myrelease, then install the chart again using this new file:

#helm install myrelease ./wordpress -f ./wordpress/testing-values.yaml 
NAME: myrelease
LAST DEPLOYED: Sun Jan 21 03:32:44 2024
NAMESPACE: default
STATUS: deployed
CHART NAME: wordpress

Now that the chart is installed successfully, let’s check our Kubernetes deployment:

#kubectl get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
myrelease-wordpress   3/3     3            0           3m2s

Voila! Our deployment now has 3 replicas instead of 1.

So, how did Helm render the chart values in this scenario?

Helm will simply use the values from the testing-values.yaml file we provided in the command. If it can’t find a key name that matches what is used in the template, it will fall back to the default values.yaml file.

So in our scenario, when we referenced replicas: {{ .Values.replicaCount }} in the deployment template, Helm used the replicaCount key from testing-values.yaml. If replicaCount had not been set in testing-values.yaml, Helm would have used the replicaCount from the default values.yaml file.
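Individual values can also be overridden directly on the command line with the --set flag, which takes precedence over both the default values.yaml and any file passed with -f:

```shell
# --set wins over -f, which in turn wins over the chart's default values.yaml
helm install myrelease ./wordpress -f ./wordpress/testing-values.yaml --set replicaCount=5
```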

Helm Chart Repositories

Just as normal package managers have their own repositories to host software packages, Helm has chart repositories to host charts. Chart repositories are locations where chart packages are stored and shared.

In its simplest form, a chart repository is an HTTP server that hosts chart packages and an index file (index.yaml) that describes them. It serves requests from clients to download the required chart files.
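For illustration, the index file lists every chart in the repository along with its versions and download URLs; a minimal, hypothetical index.yaml entry might look like:

```yaml
apiVersion: v1
entries:
  mychart:
    - name: mychart
      version: 0.1.0
      appVersion: "1.0.0"
      created: "2024-01-21T00:00:00Z"
      urls:
        - https://example.com/charts/mychart-0.1.0.tgz
```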

To start working with a repository, we need to add it first to our local Helm instance:

#helm repo add bitnami https://charts.bitnami.com/bitnami
#helm repo list
NAME    URL                               
bitnami https://charts.bitnami.com/bitnami

The helm repo command is used to interact with chart repositories. Here we added a new repo using the subcommand add followed by the repository name, which is bitnami, and then the repo URL. We can then list the repos we have using the list subcommand.

We can search through repositories to find specific charts that we need. Generally, there are two main sources we can search:

Local repositories

These are the repositories that we’ve manually added to our local Helm instance like in the previous example. This search is done against the local index data.

We use the command helm search repo to search our local repositories:

#helm search repo wordpress
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/wordpress       19.1.2          6.4.2           WordPress is the world’s most popular blogging ...
bitnami/wordpress-intel 2.1.31          6.1.1           DEPRECATED WordPress for Intel is the most popu…

The Artifact Hub

The Artifact Hub is a publicly available web-based application that contains packages and configurations for different CNCF projects like Helm. When searching the hub, we can find and install charts from a lot of different repositories.

The command helm search hub is used to search through the different repositories under the hub:

#helm search hub wordpress
URL                                                     CHART VERSION   APP VERSION             DESCRIPTION                                       
https://artifacthub.io/packages/helm/kube-wordp...      0.1.0           1.1                     this is my wordpress package                      
https://artifacthub.io/packages/helm/wordpress-...      1.0.2           1.0.0                   A Helm chart for deploying Wordpress+Mariadb st...
https://artifacthub.io/packages/helm/bitnami/wo...      19.1.2          6.4.2                   WordPress is the world’s most popular blogging ...
https://artifacthub.io/packages/helm/bitnami-ak...      15.2.13         6.1.0                   WordPress is the world’s most popular blogging ...
https://artifacthub.io/packages/helm/shubham-wo...      0.1.0           1.16.0                  A Helm chart for Kubernetes

Here, we can see that our hub search returns wordpress charts from multiple repositories. We also didn’t need to explicitly configure the hub location or URL in our local Helm instance; Helm automatically knows how to search the hub.

Once we find the chart we’re looking for through any of the above search sources, we can install it directly using our helm install command.


Conclusion

Helm, known as the package manager for Kubernetes, streamlines the process of installing, upgrading, and rolling back applications. It achieves this by bundling all necessary application resources and configurations into a single, manageable unit called a chart. These Helm charts enable the deployment of applications on Kubernetes in a consistent and efficient manner, regardless of the application's complexity.

For those not yet utilizing Helm in their deployment processes, it’s highly recommended to delve into this tool. Helm significantly transforms the management and operational aspects of Kubernetes applications, making it an indispensable asset in modern software deployment.

Further Resources

If you want to get more into the details of Helm, you can check these resources: