Terraform is an infrastructure-as-code (IaC) tool that enables users to define infrastructure declaratively using the HashiCorp Configuration Language (HCL). Developed by HashiCorp and released in 2014, Terraform has become an industry standard for provisioning infrastructure.
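As a quick illustration of the declarative style, here is a minimal, hypothetical HCL block: we describe the desired state, and Terraform works out the API calls needed to make reality match it.

# A hypothetical resource: declare a Kubernetes namespace named "demo"
# and let Terraform create, update, or delete it to match this definition.
resource "kubernetes_namespace" "demo" {
  metadata {
    name = "demo"
  }
}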

On the other hand, Helm is a package manager specifically designed for Kubernetes. It simplifies the management and deployment of complex applications by packaging them into a single unit called a "chart." These charts can be published to a central registry, making it easier for developers to discover and utilize them in their Kubernetes deployments.

In this post, we will explore how to get the best of both worlds by combining Terraform and Helm: we will deploy Emissary-ingress to a Kubernetes cluster, create Kubernetes deployments using HCL, and expose the deployment with Emissary-ingress.

Prerequisites

This tutorial assumes some familiarity with Kubernetes and Terraform. In addition, you will need the following tools installed on your machine to follow along:

  • The Civo CLI, to create the cluster
  • kubectl, to interact with the cluster
  • The Terraform CLI

Creating a cluster

We’ll begin by creating a Kubernetes cluster. For simplicity, we will be doing it from the CLI:

civo k3s create --create-firewall --nodes 2 -m --save --switch --wait emissary-experiments -r=Traefik

Using the -r flag, we remove the default ingress controller (Traefik). Since we will be using Emissary-ingress to expose applications, this ensures Traefik does not interfere with Emissary.

The -m flag tells the Civo CLI to merge the kubeconfig for the new cluster into our existing kubeconfig.
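If you want to confirm the merge worked and that kubectl now points at the new cluster, a quick sanity check (assuming you have kubectl installed) looks like this:

# Show the active context and confirm both nodes are reachable
kubectl config current-context
kubectl get nodes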

Configuring the Providers

Providers allow Terraform to communicate with external APIs such as cloud providers and, for our use case, Kubernetes and Helm. Let’s configure providers for Helm and Kubernetes.

To do this, create a file called main.tf and paste the code below into the file:

# main.tf
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.10.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.21.1"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

In the above code, we declared the Helm and Kubernetes providers in their respective blocks and passed the location of our kubeconfig via the config_path argument.
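If your kubeconfig contains more than one cluster, both providers also accept a config_context argument so you can pin them to a specific context instead of relying on whichever one happens to be active. A sketch, assuming the context is named emissary-experiments:

provider "kubernetes" {
  config_path = "~/.kube/config"
  # Hypothetical context name; match it to the entry in your kubeconfig
  config_context = "emissary-experiments"
}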

Next, initialize Terraform by running the terraform init command. This downloads and installs the Helm and Kubernetes providers.
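You can optionally follow the init with terraform validate to catch configuration errors early:

# Download and install the providers declared in main.tf
terraform init

# Optional: check the configuration for syntax errors
terraform validate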

Deploying Emissary-ingress

To install Emissary-ingress using the Helm provider, we will use the helm_release resource to deploy the Helm chart. Open the main.tf file we created earlier and follow along with the code below.

# main.tf
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.10.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.21.1"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "null_resource" "apply-crds" {
  provisioner "local-exec" {
    command = "kubectl apply -f <https://app.getambassador.io/yaml/emissary/3.7.0/emissary-crds.yaml>"
  }
  
}

resource "helm_release" "emissary_ingress" {
  name       = "emissary-ingress"
  repository = "<https://app.getambassador.io>"
  chart      = "emissary-ingress"
  version    = "8.7.0"
  skip_crds  = false
  depends_on = [null_resource.apply-crds]
    create_namespace = true 
}

In the above code, we introduce a new resource, null_resource, which we use to install the custom resource definitions (CRDs) that Emissary-ingress depends on. These don't come bundled with the chart, so we have to apply them to the cluster before deploying it. We then use a helm_release to install version 8.7.0 of the emissary-ingress chart, with the create_namespace field telling Terraform to create the release namespace if it doesn't already exist.

Since we introduced a new resource type (null_resource comes from the hashicorp/null provider), you'll need to re-initialize Terraform. Go ahead and run the terraform init command in your CLI.

Now we can safely deploy the Emissary-ingress by running the terraform apply command.
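Terraform prints a plan of every resource it is about to create and waits for confirmation before touching the cluster:

terraform apply
# Review the planned changes, then type "yes" to confirm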

In a couple of minutes, Emissary-ingress will be deployed. We can verify the installation by running the following:

kubectl get all -n emissary-system
## sample output 
NAME                                   READY   STATUS    RESTARTS      AGE
pod/emissary-apiext-6448c4c8f7-p2grs   1/1     Running   0             4h1m
pod/emissary-apiext-6448c4c8f7-65r7q   1/1     Running   0             4h1m
pod/emissary-apiext-6448c4c8f7-59sdt   1/1     Running   2 (56m ago)   4h1m

NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/emissary-apiext   ClusterIP   0.0.0.0      <none>        443/TCP   4h1m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/emissary-apiext   3/3     3            3           4h1m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/emissary-apiext-6448c4c8f7   3         3         3       4h1m

Deploying a sample application

Now that we have Emissary-ingress installed, let's deploy an application to test it out. For this demonstration, we will deploy Traefik's whoami application. Within main.tf, add the following resources:

# main.tf 
resource "kubernetes_manifest" "whoami" {
  manifest = {
      apiVersion = "apps/v1"
      kind = "Deployment"
      metadata = {
          name = "whoami"
          namespace = "default"
      }
      spec = {
          replicas = 1
          selector = {
              matchLabels = {
                  app = "whoami"
              }
          }
          template = {
              metadata = {
                  labels = {
                      app = "whoami"
                  }
              }
              spec = {
                  containers = [
                      {
                          name = "whoami"
                          image = "traefik/whoami"
                          ports = [
                              {
                                  containerPort = 80
                              }
                          ]
                      }
                  ]
              }
          }
      }
  } 
}

resource "kubernetes_manifest" "whoami-svc" {
    manifest = {
        apiVersion = "v1"
        kind = "Service"
        metadata = {
            name = "whoami"
            namespace = "default"
        }
        spec = {
            selector = {
                app = "whoami"
            }
            ports = [
                {
                    protocol = "TCP"
                    port = 80
                    targetPort = 80
                }
            ]
        }
    }
}

Using the kubernetes_manifest resource, we created a Deployment and a Service for the whoami image. Apply the changes to your cluster with terraform apply.

The deployment and service should be up in a couple of minutes. We can verify with:

kubectl get deployment whoami
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
whoami   1/1     1            1           4h
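Since the deployment labels its pods with app = whoami, we can also inspect the pods directly using a label selector:

# List the pods created by the whoami deployment
kubectl get pods -l app=whoami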

Exposing the service

Now that we have an application running in the cluster, let's expose it. To do this, we'll need two new resources, a Listener and a Mapping:

  • The Listener CRD defines where and how the Emissary-ingress should listen for requests from the network and which Host definitions should be used to process those requests.
  • The Mapping CRD allows you to map a resource to a service. This will be further illustrated shortly.

Earlier, we installed the CRDs, so no extra steps are required here. Head back to main.tf and update it with the code below:

#main.tf 
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.10.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.21.1"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "null_resource" "apply-crds" {
  provisioner "local-exec" {
    command = "kubectl apply -f "
  }
  
}

resource "helm_release" "emissary_ingress" {
  name       = "emissary-ingress"
  repository = ""
  chart      = "emissary-ingress"
  version    = "8.7.0"
  skip_crds  = false
  depends_on = [null_resource.apply-crds]
}

resource "kubernetes_manifest" "whoami" {
  manifest = {
      apiVersion = "apps/v1"
      kind = "Deployment"
      metadata = {
          name = "whoami"
          namespace = "default"
      }
      spec = {
          replicas = 1
          selector = {
              matchLabels = {
                  app = "whoami"
              }
          }
          template = {
              metadata = {
                  labels = {
                      app = "whoami"
                  }
              }
              spec = {
                  containers = [
                      {
                          name = "whoami"
                          image = "traefik/whoami"
                          ports = [
                              {
                                  containerPort = 80
                              }
                          ]
                      }
                  ]
              }
          }
      }
  } 
}

resource "kubernetes_manifest" "whoami-svc" {
    manifest = {
        apiVersion = "v1"
        kind = "Service"
        metadata = {
            name = "whoami"
            namespace = "default"
        }
        spec = {
            selector = {
                app = "whoami"
            }
            ports = [
                {
                    protocol = "TCP"
                    port = 80
                    targetPort = 80
                }
            ]
        }
    }
}

resource "kubernetes_manifest" "emissary_ingress_listener" {
  manifest = {
    apiVersion = "getambassador.io/v3alpha1"
    kind       = "Listener"
    metadata = {
      name      = "emissary-ingress-listener-8080"
      namespace = "emissary-system"
    }
    spec = {
      port          = 8080
      protocol      = "HTTP"
      securityModel = "XFP"
      hostBinding = {
        namespace = {
          from = "ALL"
        }
      }
    }
  }
}

resource "kubernetes_manifest" "whoami-mapping" {
  manifest = {
    apiVersion = "getambassador.io/v3alpha1"
    kind       = "Mapping"
    metadata = {
      name = "whoami-mapping"
      namespace = "default"
    }
    spec = {
      hostname = "*"
      prefix   = "/whoami"
      service  = "whoami"
    }
  }
}

In the above code, we created two more kubernetes_manifest resources: a Listener and a Mapping. The Listener defines the port on which Emissary-ingress listens for HTTP requests, while the Mapping maps a URL prefix to a service; in this case, we map the /whoami prefix to the whoami service. With these resources added, we can now expose the service through Emissary-ingress.
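Note that Emissary-ingress rewrites the matched prefix to / by default, which is why the backend later sees GET / with X-Envoy-Original-Path: /whoami. If your upstream service expects the original path, the Mapping spec accepts a rewrite field; here is a hypothetical variation of the Mapping above:

# Hypothetical variation: setting rewrite to an empty string passes the
# original /whoami path through to the backend instead of rewriting it to "/"
resource "kubernetes_manifest" "whoami-mapping-norewrite" {
  manifest = {
    apiVersion = "getambassador.io/v3alpha1"
    kind       = "Mapping"
    metadata = {
      name      = "whoami-mapping-norewrite"
      namespace = "default"
    }
    spec = {
      hostname = "*"
      prefix   = "/whoami"
      rewrite  = ""
      service  = "whoami"
    }
  }
}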

Apply the changes by running terraform apply.

Using some kubectl magic, we can output the external IP address of the load balancer that Emissary-ingress created when we deployed the Helm chart.

export LB_IP=$(kubectl get svc emissary-ingress --output=jsonpath='{.status.loadBalancer.ingress[0].ip}')

In the snippet above, we extract the IP address of the load balancer by filtering the output of kubectl get svc using jsonpath. When the snippet is run, it assigns the output of the kubectl command to the variable LB_IP.
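If LB_IP comes back empty, the cloud load balancer is likely still provisioning. You can watch the service until an external IP appears and then confirm the variable is set:

# Watch until EXTERNAL-IP is populated (Ctrl+C to stop)
kubectl get svc emissary-ingress --watch

# Confirm the variable
echo $LB_IP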

Finally, we can test that everything works by running curl http://$LB_IP/whoami. This will return an output similar to the one below:

# output 
Hostname: whoami-848ddc4d99-qjbcj
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.14
IP: fe80::306b:12ff:fe99:1e25
RemoteAddr: 10.42.0.13:52868
GET / HTTP/1.1
Host: 212.2.243.123
User-Agent: curl/7.87.0
Accept: */*
X-Envoy-Expected-Rq-Timeout-Ms: 3000
X-Envoy-Internal: true
X-Envoy-Original-Path: /whoami
X-Forwarded-For: 10.42.0.1
X-Forwarded-Proto: http
X-Request-Id: 3ac660f1-c284-4b28-b406-ea56f8da221a

Summary

In this post, we successfully deployed Emissary-ingress and used it to expose a sample application. With Terraform and Helm, we were able to automate the deployment process.

While writing your deployments in HCL might not be for everyone, keeping your configuration in a single place and a single format means less context switching when configuring your Kubernetes services, and less time spent debugging YAML indentation errors.

If you’re looking to learn more about Terraform, here are a couple of ideas: