For several days I have been playing around with the idea of installing HAProxy as my Kubernetes ingress, since HAProxy announced support for it. I really liked the idea because I have used HAProxy on other occasions (a long, long time ago), such as load balancing for LDAP, MySQL and other web services, and I have nothing bad to say about its performance. So I wrote this guide to show how to use HAProxy as a Kubernetes ingress, to accompany the Civo Kubernetes Marketplace application I submitted. If you want to follow along, make sure you are signed up to Civo Managed Kubernetes; you will need a Civo account to build a cluster as you work through the guide.

What is HAProxy?

From the official documentation:

HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for very high traffic web sites and powers quite a number of the world's most visited ones.

Who uses HAProxy?

Being open source, and therefore continually tested and improved on by the community, HAProxy is trusted by a number of the world's leading companies and cloud providers, including Airbnb and Instagram. It provides extensive support for modern architectures (including microservices) and deployment environments from the cloud to containers and even appliances.

The fun part - installation

The easiest way to install HAProxy is from the Civo Application Marketplace. You can of course do it manually using Helm or kubectl, but for the sake of simplicity this guide describes how to do it from the Marketplace.
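If you do prefer the Helm route, a typical install looks something like the sketch below, using HAProxy Technologies' public chart repository. The release name haproxy-ingress is just a placeholder, and you should check the chart's documentation for the current values:

# add the HAProxy Technologies chart repository
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
# install the ingress controller under a release name of your choice
helm install haproxy-ingress haproxytech/kubernetes-ingress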

If you are provisioning a new instance, make sure you uncheck Traefik to remove it from your cluster, and choose HAProxy instead. See the picture below for reference:

HAProxy in the Marketplace

Then simply create your cluster with any other applications you need. Once your cluster is running, obtain its KUBECONFIG by saving the file and placing it where kubectl will know to find it. The easiest way to do this is with the Civo CLI tool: simply run civo k8s config yourclustername --save --merge and switch to the new context if needed.
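For example, assuming your cluster is called demo-cluster (a placeholder name), the steps would look something like this; the kubectl context created by the CLI is normally named after the cluster:

# download the cluster's kubeconfig and merge it into ~/.kube/config
civo k8s config demo-cluster --save --merge
# switch to the new context (normally named after the cluster)
kubectl config use-context demo-cluster
# confirm kubectl can reach the cluster
kubectl get nodes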

If you are adding HAProxy to an existing Civo cluster, you will need to remove the default Traefik ingress first so the two controllers do not conflict.

HAProxy Demo

The demo we are going to use is a small image that displays a message along with the name of the pod that served the request, and we will deploy it to our cluster as follows. First, save the following manifest as hello-world.yaml or similar:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-custom
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: hello-kubernetes-custom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-custom
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-custom
  template:
    metadata:
      labels:
        app: hello-kubernetes-custom
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: I just deployed this on Civo Kubernetes using HAProxy!

Then apply the file to your cluster using kubectl:

kubectl apply -f hello-world.yaml
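To confirm the deployment has rolled out and all three replicas are running, you can check with kubectl:

# wait for the rollout to finish
kubectl rollout status deployment/hello-kubernetes-custom
# list the demo pods and the service
kubectl get pods -l app=hello-kubernetes-custom
kubectl get service hello-kubernetes-custom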

Now, to allow our application to be served to the outside world, we will create our ingress. Save the following as ingress.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
spec:
  rules:
  - host: da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes-custom
          servicePort: 8080

There are a few things to note in the above snippet. In my case the URL Civo assigned to my cluster was da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com, so you should modify the host line to match your cluster's assigned DNS name. As you are already running HAProxy as the ingress controller for your cluster, you just need to apply the file once it has been edited and saved:

kubectl apply -f ingress.yaml

If you visit your cluster's URL (in my case, da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com), you should see the hello-world message, confirming that the Ingress has been applied successfully and traffic is being handled by HAProxy.
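You can also check from the command line; the Ingress should list your host, and repeated curl requests should return responses from different pods as HAProxy balances the traffic across the replicas:

kubectl get ingress web-ingress
curl http://da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com/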

To see the HAProxy statistics page you can visit the same cluster URL on port 1024. To make modifications to your HAProxy configuration, you can use a ConfigMap; all the available options can be viewed here.
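As a minimal sketch, a ConfigMap tweaking a couple of the documented options (maxconn and timeout-client) could look like this. The ConfigMap name and namespace below are assumptions about how the Marketplace app deploys the controller, so check which ConfigMap your controller was actually started with before applying anything:

apiVersion: v1
kind: ConfigMap
metadata:
  # assumed name/namespace - use the ConfigMap your controller actually watches
  name: haproxy-kubernetes-ingress
  namespace: haproxy-controller
data:
  # global connection limit for HAProxy
  maxconn: "2000"
  # how long to wait on an inactive client connection
  timeout-client: "50s"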

Of course, in place of the above demo you could deploy any service and it would be served through the ingress in the same way; this is just a simple example showing how to use it.

Benchmark test

This is a small test I ran against the cluster with the HAProxy ingress and the service scaled to 11 replicas, using ab (Apache Benchmark) with 50,000 requests and 1,000 concurrent connections:

root@www:/home/admin# ab -n 50000 -c 1000 http://da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com/
This is ApacheBench, Version 2.3 <$Revision: 1757674 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Completed 50000 requests
Finished 50000 requests

Server Software:
Server Hostname:        da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com
Server Port:            80

Document Path:          /
Document Length:        686 bytes

Concurrency Level:      1000
Time taken for tests:   29.366 seconds
Complete requests:      50000
Failed requests:        0
Total transferred:      44400000 bytes
HTML transferred:       34300000 bytes
Requests per second:    1702.64 [#/sec] (mean)
Time per request:       587.322 [ms] (mean)
Time per request:       0.587 [ms] (mean, across all concurrent requests)
Transfer rate:          1476.51 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1   11  34.6      3    1121
Processing:     3  567 798.6    209    3481
Waiting:        3  567 798.6    209    3481
Total:          4  577 800.0    223    3860

Percentage of the requests served within a certain time (ms)
  50%    223
  66%    511
  75%    730
  80%    886
  90%   2019
  95%   2579
  98%   2963
  99%   3134
 100%   3860 (longest request)

That means the cluster sustained 1,000 concurrent HTTP connections at roughly 1,700 requests per second without a single failed request. Not bad for a small k3s cluster!
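If you want to run a similar test against your own cluster, scale the demo deployment up and point ab at your cluster's DNS name (the hostname below is mine, so replace it with yours):

# scale the demo deployment to the 11 replicas used in the test above
kubectl scale deployment hello-kubernetes-custom --replicas=11
# run 50,000 requests at a concurrency of 1,000
ab -n 50000 -c 1000 http://da90b815-20e7-433b-84ce-391841ecf5ef.k8s.civo.com/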

I hope you have found this guide interesting and useful. I think HAProxy could be a good ingress option for Kubernetes - let me know if you agree or disagree. I can be found on Twitter at @alejandrojnm, and Civo is at @civocloud.