A/B testing, sometimes called split testing, is a deployment strategy in which a subset of users (usually defined as a percentage) is routed to a newer version of an application while the rest continue to use the current one. This is extremely useful when rolling out a new feature and you are unsure how it might affect existing users.

By performing a split test you can observe the behaviour of the subset of users who interact with the newer version of your application and make an informed decision on whether to perform a full rollout, or roll back if something goes wrong.

When it comes to A/B testing on Kubernetes, quite a few tools already exist. Most are bundled with modern service meshes like Linkerd and Istio, or come as specialized operators such as Flagger.

In this post we will explore a different approach to A/B testing on Kubernetes, using the Nginx Ingress controller to perform an A/B test. There's a good chance you already use Nginx as your Ingress controller, so extending it to achieve this is a much simpler approach than deploying a full service mesh.

Prerequisites

To follow along you will need the following:

- A Civo account
- The Civo CLI installed
- kubectl installed on your local machine

Launching a cluster using the Civo CLI

Let's start by creating a Kubernetes cluster with the Nginx ingress controller installed. You can do this from your Civo dashboard or using the Civo CLI. For simplicity, we will be removing the default Traefik ingress controller.

civo kubernetes create example-cluster --size "g4s.kube.medium" --nodes 2 --wait --save --merge --region NYC1 -r=Traefik -a=Nginx --create-firewall="80;443"

The command above launches a new Kubernetes cluster with the Nginx ingress controller installed, and saves the KUBECONFIG so you can access the cluster. If you are creating your cluster through the Civo dashboard instead, be sure to install Nginx through the Civo Marketplace and de-select the default Traefik ingress controller. You will also need to download your KUBECONFIG file.

Note that as we are installing the Nginx ingress controller, this will automatically deploy a Load Balancer in your Civo Account. Read this page for more information on Kubernetes load balancers on Civo.

Run the following command to switch your Kubernetes context to your new cluster:

kubectl config use-context example-cluster
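
Before moving on, you can optionally check that the context switch worked and that the Nginx ingress controller is up. The namespace the controller runs in depends on how it was installed, so the loose grep below is just a quick sanity check:

kubectl get nodes
kubectl get pods --all-namespaces | grep -i nginx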

Deploying the application

For this demo, I will be deploying a web application I built using Go. It's a single page app that serves an image of a gopher on the index page and the version number on /version.

We'll begin by creating a deployment for v1 and v2 of our application. Create a file called deployment.yaml and populate it with the following code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      version: v1
  template:
    metadata:
      labels:
        app: webapp
        version: v1
    spec:
      containers:
      - name: webapp-v1
        image: ghcr.io/s1ntaxe770r/webapp-v1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
          - containerPort: 8080
            protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
      - name: webapp-v2
        image: ghcr.io/s1ntaxe770r/webapp-v2
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
          - containerPort: 9090

Apply the deployment with the following command:

kubectl apply -f deployment.yaml
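
Before moving on, you can confirm that both Deployments are up. Since both versions share the app: webapp label, a label selector is a handy way to list their pods:

kubectl get deployments webapp-v1 webapp-v2
kubectl get pods -l app=webapp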

Before we expose our deployments we need to create Services for them. Create a file named service.yaml and add the following code:

apiVersion: v1
kind: Service
metadata:
  name: webapp-v1
spec:
  selector:
    app: webapp
    version: v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-v2
spec:
  selector:
    app: webapp
    version: v2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9090

Create the service using:

kubectl apply -f service.yaml
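
If the Service selectors matched the pod labels correctly, each Service should now have exactly one endpoint behind it. A quick way to verify this:

kubectl get svc webapp-v1 webapp-v2
kubectl get endpoints webapp-v1 webapp-v2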

Exposing the service

To expose the service create a file called ingress.yaml and add the following code:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/app-root: /home
spec:
  ingressClassName: nginx
  rules:
  - host: # your civo dns name
    http:
      paths:
      - path: /home
        pathType: Exact
        backend:
          service:
            name: webapp-v1
            port:
              number: 80
      - path: /version
        pathType: Exact
        backend:
          service:
            name: webapp-v1
            port:
              number: 80

Be sure to add your Civo DNS name in the host field. You can find this on the cluster information page, or if you have the Civo CLI installed, you can run the following command to retrieve your DNS name:

$ civo kubernetes show example-cluster -o custom -f "DNSEntry"
# cluster-id-60cb32e9bc42.k8s.civo.com

Apply the ingress rule using:

kubectl apply -f ingress.yaml
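
You can check that the Ingress has been picked up by the controller and shows the expected host and address with:

kubectl get ingress webapp-ingress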

Now head over to http://your-civo-dns-name.k8s.civo.com and you should see a page like this:

A browser window with a gopher with its mouth open

Performing the split test

To perform the split test we need to create another Ingress for version 2 of our application, with some minor changes. Create a file called ingress-v2-canary.yaml and add the following code:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-v2-canary
  annotations:
    nginx.ingress.kubernetes.io/app-root: /home
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "60"
spec:
  ingressClassName: nginx
  rules:
  - host: # your civo dns name
    http:
      paths:
      - path: /home
        pathType: Exact
        backend:
          service:
            name: webapp-v2
            port:
              number: 80
      - path: /version
        pathType: Exact
        backend:
          service:
            name: webapp-v2
            port:
              number: 80

Make sure you add the DNS name of your cluster to the host: field in the file above.

Notice two new annotations have been added to the Ingress.

nginx.ingress.kubernetes.io/canary: "true" tells Nginx to treat this Ingress as a canary, and nginx.ingress.kubernetes.io/canary-weight: "60" tells Nginx how much traffic we would like to send to this service - in this case 60%. Before we move ahead, let's talk a little bit about the canary annotations.

The canary annotation allows us to enable a canary deployment using the Ingress controller.

In this post, we are using canary-weight to split traffic by percentage. However, other annotations can be used to route users to a specific version of our application. One example is the nginx.ingress.kubernetes.io/canary-by-header annotation (used together with canary-by-header-value), which, as the name suggests, routes a request to the canary version based on the value of a request header. See this section of the Nginx Ingress docs for more information on canary annotations.
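
For example, a header-based canary Ingress might carry annotations like the ones below. This is only a sketch: the header name X-Canary and the value v2 are arbitrary choices for illustration, not something defined by the demo application.

# Sketch: header-based routing instead of weight-based routing.
# The header name "X-Canary" and value "v2" are illustrative choices.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # Requests carrying this header are candidates for the canary backend.
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    # Only requests where X-Canary equals "v2" are routed to the canary;
    # everything else continues to hit the stable version.
    nginx.ingress.kubernetes.io/canary-by-header-value: "v2"

You could then exercise the canary explicitly with something like curl -H "X-Canary: v2" http://your-civo-dns-name.k8s.civo.com/version.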

Let's apply the v2 ingress to our cluster:

kubectl apply -f ingress-v2-canary.yaml

Head back to the browser. After a few refreshes you should be greeted with a page like this:

A browser window with a gopher with its mouth open

Alternatively, you can test it by running the following bash script, replacing your cluster DNS name where required:

#!/bin/bash
for i in {1..10}; do
  version=$(curl -sSL cluster-id-60cb32e9bc42.k8s.civo.com/version)
  echo "$version"
done

When run, it will look something like:

v1
v2
v2
v1
v1
v2
v2
v1
v2
v1

As you can see, you are served v2 a proportion of the time, and v1 the rest of the time.

Now that we are confident our application works as expected, it's time to perform a full rollout.

Rolling out

We'll begin by setting the canary weight to 100 so Nginx can send all traffic to version 2 of our application. Let's edit our ingress-v2-canary.yaml file:

# ingress-v2-canary.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-v2-canary
  annotations:
    nginx.ingress.kubernetes.io/app-root: /home
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
spec:
  ingressClassName: nginx
  rules:
  - host: #your civo dns name
    http:
      paths:
      - path: /home
        pathType: Exact
        backend:
          service:
            name: webapp-v2
            port:
              number: 80
      - path: /version
        pathType: Exact
        backend:
          service:
            name: webapp-v2
            port:
              number: 80

Apply the updated Ingress rule to update the traffic weighting to 100%:

$ kubectl apply -f ingress-v2-canary.yaml
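
With the canary weight at 100, every request should now be answered by v2. You can confirm this by re-running the version loop from earlier (replacing the DNS name with your own); every response should now read v2:

for i in {1..5}; do
  version=$(curl -sSL cluster-id-60cb32e9bc42.k8s.civo.com/version)
  echo "$version"
done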

Next, we'll point our main Ingress at version 2 of the application by updating its backend service:

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/app-root: /home
spec:
  ingressClassName: nginx
  rules:
  - host: # your civo dns name
    http:
      paths:
      - path: /home
        pathType: Exact
        backend:
          service:
            name: webapp-v2
            port:
              number: 80
      - path: /version
        pathType: Exact
        backend:
          service:
            name: webapp-v2
            port:
              number: 80

Apply the updated Ingress rule:

$ kubectl apply -f ingress.yaml

Finally, delete the canary Ingress for v2: Nginx allows at most one canary Ingress per Ingress rule (see here for more), and having a single Ingress is much easier to reason about.

$ kubectl delete -f ingress-v2-canary.yaml
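
At this point only the original webapp-ingress should remain, now pointing at webapp-v2. You can confirm that the canary Ingress is gone with:

kubectl get ingress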

Conclusion

In this post, we looked at how to perform an A/B test using the Nginx Ingress controller and how A/B testing could be useful in testing a new feature and observing user behavior.

All the code used in this post is available over here.
