Christian F suggested one of our most-requested features, saying "Having the provision of load balancers built into the dashboard would be great." Well, in line with our vision of being community driven, once we had some critical updates and rewrites out of the way, we've been hard at work building out this feature.

What is a load balancer?

Put simply, if you have two or more instances with us, you want the traffic to your website to be spread between them. You can do this with DNS by adding multiple A records for your website address (e.g. having the same hostname point to two different IP addresses). The problem then is that you always send half of your traffic to each of those IP addresses, and if one of them goes down, half of your traffic goes into the abyss too!
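To see why, here's a quick Python sketch (purely illustrative, with made-up IP addresses) simulating round-robin DNS when one of two backends is down:

```python
import random

# Hypothetical pool of A records for the same hostname; one backend is down.
a_records = {"10.0.0.1": "up", "10.0.0.2": "down"}

def resolve_and_request():
    """DNS round-robin: the resolver hands back one of the A records at
    random, with no knowledge of whether that backend is alive."""
    ip = random.choice(list(a_records))
    return a_records[ip] == "up"  # True = request served, False = lost

# Roughly half of all requests go into the abyss while one IP is down.
results = [resolve_and_request() for _ in range(10_000)]
print(f"served: {sum(results)} / {len(results)}")
```

DNS alone has no health checking, so the dead IP keeps receiving its share of requests until you remove the record (and wait for caches to expire).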

Using the Civo API, you can build a simple HA solution along those lines: run multiple instances and move the public IP of any instance that goes down to another running instance - we even have a guide on how to do that.

However, that isn't a true load balancing solution - the traffic isn't spread in any meaningful way and you have to script your own solution to ensure high availability. So, we've now launched proper Load Balancers.

Creating a load balancer

You should see an option labelled "Load Balancers" on the left-hand side. Here's how the dashboard looks listing the instances that we'll later balance traffic between (each instance is simply Ubuntu with Apache 2 installed, and the index HTML file on each edited to include the server name).

lb-1. backends.png

The next stage is to click on that link in the left bar, then click the "Create load balancer" button.

lb-2. backend listing empty .png

Configuring your load balancer

The form that appears when you click that button may look quite complicated, although we've tried to keep it as simple as possible. At a minimum, you just enter the hostname you want to serve traffic for, choose one or more backends and click the big blue "Save" button. We'll use an example hostname throughout this guide. If you want to serve HTTPS (SSL/TLS) traffic, simply drop down the Protocol option and choose "HTTPS". The site will automatically change the port to "443" and will set up a redirect from HTTP to HTTPS for you.

You'll need to paste in the signed TLS certificate and TLS private key that you received when you created the certificate.

If you want to change any of the advanced settings, now is a good time to do so. The Policy is the most important choice, and we'll describe each of the options below. The Health check path is the URL that the load balancer should request on your backend instances to see if they're alive. A 2xx/3xx response code from a server will keep it in the pool; a 4xx/5xx response will take it out until it starts returning a 2xx/3xx again.
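As a sketch of that keep/remove rule (the function and instance names here are ours, purely for illustration):

```python
def backend_is_healthy(status_code: int) -> bool:
    """A 2xx/3xx response on the health check path keeps the backend in
    the pool; a 4xx/5xx response takes it out until it recovers."""
    return 200 <= status_code < 400

# Last health-check status code seen for each (hypothetical) backend.
pool = {"instance-1": 200, "instance-2": 503, "instance-3": 301}

# Only healthy backends remain eligible to receive traffic.
active = [name for name, code in pool.items() if backend_is_healthy(code)]
print(active)
```

Here instance-2 would be dropped from rotation until its health check starts returning a 2xx/3xx again.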

The Maximum request size puts a limit on, for example, uploaded files or data. The Fail timeout is how long the load balancer should remember that a backend had a failure; if a backend fails more than once within that period, it is marked as unavailable (a failed request means a timeout or a failure to connect, not just a 4xx/5xx response). The Max connections per backend determines how many concurrent requests each backend can handle. Finally, if you are listening for HTTPS requests on your backends but just using self-signed certificates, you can ignore the resulting TLS errors by checking the Ignore invalid backend TLS certificates option. So, back to the all-important policy types:

  • Least connections - keeps track of how many requests each backend is currently handling and sends each new request to the least busy server.

  • Random - sends each new request to a random backend, regardless of how busy each one is, spreading requests statistically evenly.

  • Round robin - sends each new request to the next backend in order.

  • IP address hash - hashes the IP address the request originated from into an index of your active backends, so requests from the same IP address always go to the same server.
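To make the four policies concrete, here's a rough Python sketch of how each one might pick a backend. This is illustrative only - the names and details are ours, not the load balancer's actual implementation:

```python
import itertools
import random

backends = ["backend-a", "backend-b", "backend-c"]
current_connections = {"backend-a": 12, "backend-b": 3, "backend-c": 7}

def least_connections():
    # Send the request to the backend currently handling the fewest requests.
    return min(backends, key=lambda b: current_connections[b])

def random_choice():
    # Statistically even spread, ignoring how busy each backend is.
    return random.choice(backends)

_rotation = itertools.cycle(backends)
def round_robin():
    # Each new request goes to the next backend in order, wrapping around.
    return next(_rotation)

def ip_hash(client_ip: str):
    # Hash the client IP to an index, so the same IP always reaches the
    # same backend (while the set of active backends stays unchanged).
    return backends[hash(client_ip) % len(backends)]
```

Note that with IP address hash, removing or adding a backend changes the index calculation, so some clients will be remapped to a different server at that point.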

Once you've configured the options, you just need to choose which of your backend servers should be in the pool by clicking the green arrow to add them to the list. You can then choose how the load balancer should connect to each backend (HTTP/HTTPS and port number). Note: you can only add instances with a public IP address (because our load balancers sit outside of your private networks).

lb-3. backend creation form.png

After clicking the big blue Save button, it may take up to one minute for your load balancer configuration to be deployed to our suite of load balancers.


The final step is to configure the DNS for your hostname. The easiest way to do this is to go to Civo's DNS management section and add a new CNAME record pointing your hostname at the load balancer's address.

lb-5. add dns.png

You then should be able to see the new record in your list:

lb-6. DNS added.png

If you are using the root of your domain (the apex, rather than a subdomain), you can't use a CNAME record; instead, you should point A records at the IP addresses of our load balancers (these may change or be added to in future, but we'll try to announce that on Twitter and update them here).


If you have a firewall applied to your web server instances, please make sure you allow inbound traffic from the range of IP addresses above; we won't automatically punch holes through the firewall for you. This is true whether you use a local firewall (UFW or iptables) or our firewall service.


Finally, you should be able to hit your domain name in a browser and see one of the backends serve the page. Let us know on Intercom if you have any questions, or feel free to raise a suggestion if there are enhancements you'd like to see!

lb-7. result.png