Introduction

When we first started our managed Kubernetes beta, we knew utilising K3s as the Kubernetes distribution of choice was the right move. Not only is it lightweight and quick to deploy, but it also has features ideally suited to the scenarios we envisioned our users would encounter. It’s important for us to make sure any service we offer is 100% compatible with industry standards, and K3s allows us to do just that while keeping things simple and fast for our users.

We launched the KUBE100 beta on our OpenStack orchestration and virtualisation platform. However, over the past few years we've struggled with OpenStack - upgrades became problematic as the community advice changed from "just run this Ansible playbook" to "you need a clone of your production hardware, install the new version there and then migrate all workloads over". Our experience of scaling the service to an expanding user base showed that we needed new thinking around how we structure our systems.

So we knew we'd need a platform that was easy to upgrade and could scale for the growth we're already seeing, as well as the growth we predict over the next five years. Kubernetes has served us well for hosting our own website and API as well as our customers' workloads, so, remembering the words of Mr Kubernetes, Kelsey Hightower himself: "Kubernetes is a platform for building platforms", we had a direction we wanted to head in.

Today, we’re incredibly happy to announce the long-awaited custom Civo platform, which we're calling CivoStack. Built from the ground up to serve cloud-native computing needs, it is a Kubernetes-based platform for the future. In this blog post we'll talk a bit about the vision behind the new platform, the technical implementation, and the architecture decisions that allow us to expand our service to new regions in the months to come.

Vision

Our vision was a modern, cloud-scale architecture for launching K3s clusters and services. We will partner with technology companies that form a core part of our platform and celebrate our successes together.

With OpenStack, we had a split-brain architecture in place. The OpenStack APIs always felt too unstable and too slow to reliably back our website, so we had a constant struggle keeping our own database and OpenStack in sync. This had to change - we needed a single source of truth for each type of information.

Given Kubernetes' declarative approach to resources, we definitely wanted to be able to say "I want a K3s cluster, this size, on this network, for this customer" and have a Kubernetes operator work on that resource until it was complete, then monitor it for problems down the line.
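To make that idea concrete, here's a minimal sketch in Go of what such a declarative request might look like. The type and field names are purely illustrative guesses for this post, not the real CivoK3sCluster schema; the point is simply that the whole request is one object describing the end state we want.

    package main

    import "fmt"

    // Illustrative only: a desired-state description for a customer's K3s
    // cluster, mirroring "this size, on this network, for this customer".
    type K3sClusterSpec struct {
        Customer  string // account the cluster belongs to
        Network   string // private network to attach the nodes to
        NodeCount int    // how many K3s nodes to run
        NodeSize  string // instance size for each node (hypothetical name)
        Version   string // K3s version to install
    }

    func main() {
        // "I want a K3s cluster, this size, on this network, for this customer."
        spec := K3sClusterSpec{
            Customer:  "acme-ltd",
            Network:   "default",
            NodeCount: 3,
            NodeSize:  "medium",
            Version:   "v1.20.2+k3s1",
        }
        fmt.Printf("desired state: %+v\n", spec)
    }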

Software architecture

So with a vision in place, we set about building out the details. There were two halves to the project:

  1. Writing software on two fronts: the Ruby on Rails-based www.civo.com and api.civo.com, and the custom Kubernetes operators written in Golang.

  2. Building an underlying Kubernetes cluster that is easily installed and maintained, both for standing up new regions and for easily adding capacity to existing ones (we call these superclusters to differentiate them from clients' K3s clusters when talking about Kubernetes internally).

With OpenStack and our split brain, there was a lot of code involved in sending requests to the OpenStack APIs, then queuing background jobs to check for completion on the OpenStack side (or re-sending the initial request if OpenStack accepted it and then forgot - which happened more often than I'd have found normal before using OpenStack). Other operations were multi-step processes, with each step handled by one or more background jobs. So when hunting bugs, a lot of time was spent trying to determine which of a number of asynchronous processes had gone awry.

We now have a suite of Kubernetes operators that all handle this process of "you want X state, I'll loop until the actual state matches, ideally getting closer each time". A Kubernetes operator is a combination of a custom type of object in Kubernetes and a pod that operates on those custom object types. For example, we have a CivoK3sCluster object that we can create, and our K3sOperator will pick up those object creation requests and go on to build out the actual cluster. This allows us to use Kubernetes' super nice RESTful API with YAML-based resources.
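As a rough, hypothetical sketch of that reconcile pattern (not our actual operator code - the types and helper logic here are invented for illustration), the loop boils down to comparing desired state against observed state and nudging the world one step closer each pass:

    package main

    import (
        "fmt"
        "time"
    )

    // Desired and observed state for an imaginary cluster resource; the real
    // CivoK3sCluster object carries far more detail than this.
    type desiredCluster struct{ nodes int }
    type observedCluster struct{ readyNodes int }

    // reconcile takes one corrective step toward the desired state and
    // reports whether it should be called again later.
    func reconcile(want desiredCluster, got *observedCluster) (requeue bool) {
        if got.readyNodes < want.nodes {
            got.readyNodes++ // stand-in for "provision one more K3s node"
            fmt.Printf("provisioned node %d of %d\n", got.readyNodes, want.nodes)
            return true
        }
        return false // actual state matches desired state
    }

    func main() {
        want := desiredCluster{nodes: 3}
        got := &observedCluster{}
        for reconcile(want, got) {
            time.Sleep(100 * time.Millisecond) // a real operator requeues via its work queue
        }
        fmt.Println("cluster ready; the operator keeps watching for drift")
    }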

So we've solved the problem of split brain, we've solved the problem of lots of crazy background jobs and we've solved the problem of OpenStack's inconsistent and often broken API.

What's new?

So that covers what we've fixed, but what does the new architecture bring that's new to the party?

The main thing is that we now have a cloud-scale architecture to which we can add new services super easily. For example, early this year we're planning to offer both a Database as a Service and Object Storage. They've often been requested, but there's normally considerable overhead to projects like these, such as:

  1. Purchasing and installing the hardware for the new service
  2. Managing who's going to install, monitor and upgrade/maintain the underlying software
  3. Figuring out the service's API and writing API adaptors so that api.civo.com can use the provider's API (and considering the whole split-brain thing again)

All of those go away in a Kubernetes-based world. Hardware – the new services run in pods on the same superclusters we're already using. Install, monitor and upgrade – the services we're looking at adding are Kubernetes operators themselves, have built-in installation/upgrade features, and are monitored in the same way as the rest of our supercluster statistics and state. API adaptors – pah! We don't need those, because everything uses the same Kubernetes API!

So this has enabled us to get a clean start on a great software architecture that will allow us to quickly add new features going forward.

Hardware architecture

I wrote back in early 2019 about our latest batch of hardware that was coming. Those days feel like a different lifetime now in terms of hardware architecture. Don't misunderstand, that was powerful kit and we were super proud of it - we've just outdone ourselves in replacing it.

Our current OpenStack region (called SVG1 after our datacentre in Stevenage, England) is based on Dell compute nodes running Intel Xeon Gold processors, with virtual disks stored on NetApp's SolidFire SSD-backed storage cluster. It has a 10Gb network between all of the cluster nodes and a 1Gb connection to the wider internet.

For our new New York (NYC1) CivoStack region, we've gone with a full rack of mostly compute nodes using the Open Compute Project (OCP) architecture. OCP kit is super efficient in terms of energy usage compared to traditional servers. Rather than having a power supply per server, OCP uses centralised power supplies which feed each compute node directly with DC power. We're also able to achieve extremely high density in a single rack, which helps keep our racks' footprints small, allowing for better economies of scale. All the kit is completely toolless too, making it incredibly easy to install and maintain – swapping a drive or memory stick now takes seconds instead of minutes.

If you haven't seen what makes these racks awesome, there's a good video on YouTube that shows how they are architected and how you can easily remove any of the hardware for replacement.

The racks are configured with compute nodes with 40 CPU cores each, 256GB of RAM and dual 25Gbit NICs. Each compute node only takes up a third of a shelf, so we achieve super-high density. The nodes have NVMe drives as standard for all K3s nodes and IaaS VMs. All switches and routers are in fully redundant pairs and have multiple 100Gb uplinks to the connectivity providers.

So we shipped the first of many one-ton racks of shiny, fast computer hardware to New York, but it wasn't yet running CivoStack, so we needed some way to go from "blank" to "boom!" nice and easily.

OCP Rack

Our SRE team did an outstanding job of building a system that can take a manifest of MAC addresses and, using a single switch in the rack, install our base operating system, configure the networking for all the types of nodes, and configure the switches and routers, all without needing to be hands-on at all.
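To give a feel for the idea (this is a made-up sketch, not the actual manifest format our SRE team uses), the manifest essentially maps each machine's burned-in MAC address to the role and identity it should take on once it boots:

    package main

    import "fmt"

    // Purely illustrative: a guess at what a per-rack provisioning manifest
    // might contain. The real format isn't described in this post.
    type nodeEntry struct {
        MAC      string // burned-in address used to recognise the node at boot
        Role     string // e.g. "compute", "switch", "router"
        Hostname string // name to assign once the base OS is installed
    }

    func main() {
        manifest := []nodeEntry{
            {MAC: "aa:bb:cc:00:00:01", Role: "compute", Hostname: "nyc1-compute-01"},
            {MAC: "aa:bb:cc:00:00:02", Role: "compute", Hostname: "nyc1-compute-02"},
            {MAC: "aa:bb:cc:00:00:03", Role: "switch", Hostname: "nyc1-leaf-01"},
        }
        // In a system like this, each node boots over the management switch,
        // is matched against the list by MAC address, and receives its OS
        // image and network configuration automatically.
        for _, n := range manifest {
            fmt.Printf("%s -> %s (%s)\n", n.MAC, n.Hostname, n.Role)
        }
    }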

From there, a single command will install Kubernetes across the nodes, then install a single CivoStack Operator which takes the cluster from plain Kubernetes to CivoStack without any further involvement.

Moving Forward

Our next steps will be to revamp our presence in the UK with a CivoStack replacement for the venerable hardware that has served our customers until now, as well as to build out a region in Asia. Where 2020 was a year of hard work preparing and building the CivoStack cluster system, 2021 will see us pushing further and faster than we'd ever have thought possible.

We’re only getting started. The feedback from our #KUBE100 beta community has shaped the design of the platform, and we’re not done yet. We want to hear what you think of our managed Kubernetes service so that when we come to fully launch the service out of beta, we have the best possible feature set for you, our users.