Since opening our #KUBE100 beta (apply to join) we've had a lot of questions from our members on what the differences are between full-blown Kubernetes (K8s) and k3s, aside from the choice from each on how to capitalise a "K" (or not).

What is Kubernetes?

For those of you not in the know, Kubernetes is a "container orchestration platform". This effectively means taking your containers (everyone's heard of Docker by now, right?) and choosing which machine in a group to run each of them on.
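As a concrete (and purely illustrative) example, this is roughly what you ask of Kubernetes - a minimal Deployment manifest requesting three copies of an nginx container, with the names made up for this sketch:

```yaml
# Hypothetical example: ask Kubernetes to keep 3 copies of a container running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-website
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-website
  template:
    metadata:
      labels:
        app: my-website
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
```

Kubernetes then decides for itself which machines in the cluster actually run those three containers.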

It also handles things like upgrades of your containers, so if you make a new release of your website, it will gradually launch containers with the new version and gradually kill off the old ones, usually over a minute or two.
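In practice, a rolling release like that can be as simple as pointing your Deployment at a new image (assuming the hypothetical `my-website` Deployment above):

```shell
# Point the Deployment at the new image; Kubernetes rolls it out gradually
kubectl set image deployment/my-website web=nginx:1.17.4

# Watch old pods drain away as new ones come up
kubectl rollout status deployment/my-website
```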

So what is K8s?

K8s is just an abbreviation of Kubernetes ("K" followed by 8 letters "ubernete" followed by "s"). However, normally when people talk about either Kubernetes or K8s, they are talking about the original upstream project, designed by Google as a hugely scalable, highly available platform.

For example, here's a Kubernetes cluster handling a zero-downtime update while performing 10 million requests per second on YouTube.

The problem is that while you can run Kubernetes on your local developer machine with Minikube, if you're going to run it in production you very quickly get into the realm of "best practices", with advice like:

  1. Separate your masters from your nodes - your masters run the control plane and your nodes run your workload - and never the twain shall meet.
  2. Run etcd (the database for your Kubernetes state) on a separate cluster to ensure it can handle the load.
  3. Ideally, have separate Ingress nodes so they can handle the incoming traffic easily, even if some of the underlying nodes are slammed busy.

Very quickly this can get you to 3 x K8s masters, 3 x etcd and 2 x Ingress, plus your nodes. So that's a realistic minimum of 8 medium instances before you even get to "how many nodes do I need for my site?".

Don't misunderstand us, if you're running a production workload this is VERY sane advice. There's nothing worse than trying to debug a down production cluster that's overloaded, late on a Friday night!

However, if you just want to learn Kubernetes, or maybe host a development/staging cluster for non-essential things, it feels like overkill, right? At least it does to us. If I want to fire up a cluster to check that my Kubernetes manifests (the configuration for deployments, etc.) are correct, I'd rather not incur a cost of over a hundred dollars per month to do it.
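For that kind of sanity check, something lightweight is enough. One sketch, with a made-up `manifests/` directory standing in for your own:

```shell
# Spin up a throwaway single-node cluster on your own machine
minikube start

# Validate manifests against the API server without creating anything
kubectl apply --dry-run -f manifests/
```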

How does k3s fit into this?

Rancher Labs is a big player in the Kubernetes arena. Their flagship product Rancher is an amazing GUI for managing and installing Kubernetes clusters. They have released a number of pieces of software that are part of this ecosystem, for example Longhorn, a lightweight and reliable distributed block storage system for Kubernetes. However, to bring us back on topic, they are also the authors of k3s.

K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. In order to achieve this, they removed a lot of extra drivers that didn't need to be part of the core and are easily replaced with add-ons.
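That single binary also makes installation a one-liner, using the install script from the k3s project:

```shell
# Download and run k3s as a server node
curl -sfL https://get.k3s.io | sh -

# The bundled kubectl can talk to the new cluster straight away
sudo k3s kubectl get nodes
```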

K3s is a fully CNCF (Cloud Native Computing Foundation) certified Kubernetes offering. This means that you can write your YAML to operate against a regular "full-fat" Kubernetes cluster and it will also apply against a k3s cluster.

Due to its low resource requirements, it's possible to run a cluster on machines with as little as 512MB of RAM. This means that we can allow pods to run on the master as well as on nodes.

And of course, because it's a tiny binary, it means we can install it in a fraction of the time it takes to launch a regular Kubernetes cluster! We generally achieve sub-two minutes to launch a k3s cluster with a handful of nodes, meaning you can be deploying apps to learn/test at the drop of a hat.
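Adding those extra nodes is similarly quick - the same installer runs in agent mode, pointed at the server (the `myserver` hostname here is a placeholder, and the token comes from the server itself):

```shell
# On the server, grab the join token
sudo cat /var/lib/rancher/k3s/server/node-token

# On each new node, run the installer in agent mode
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=<token> sh -
```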

Sounds good, so it's just the same but better?

Well, kind of. When most people think of Kubernetes, they think of containers automatically being brought up on other nodes if a node dies, of load balancing between containers, of isolation and of rolling deployments - and all of those advantages are the same between "full-fat" Kubernetes and k3s.

However, it's not all sunshine and roses - if it were, everyone would be using k3s. So why aren't they?

Firstly, at the time of writing (k3s v0.8.1) there is only the option to run a single master within k3s itself. This means that if your master goes down, you lose the ability to manage your cluster (although all your existing containers will continue to run). If you run an external database platform, you can launch with multiple masters now, and there is work being done for the k3s v1.0 GA to support multiple masters natively within k3s.
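As a rough sketch of the external-database route (the flag below is the one exposed in later k3s releases, and the endpoint is a placeholder, not a recommendation):

```shell
# Run each k3s server against a shared external database instead of SQLite,
# so that multiple masters can share the same cluster state
k3s server --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s"
```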

Secondly, the default database in single-master k3s clusters is SQLite. This is great for small databases that don't see much action, but it can quickly become a major pain if it's being hammered! However, the changes happening in a Kubernetes control plane are mostly about updating deployments, scheduling pods, etc., and don't happen that frequently - so the database load isn't too much for a small dev/test cluster.

Which should I choose?

If you want production, we'd recommend a full Kubernetes installation - we have a guide on installing Kubernetes on Civo with Kubespray.

If you want a learning playground, development or staging cluster, why not apply for our #KUBE100 service - it's much cheaper, quicker to launch and you'll get some free credit each month to try it out while we are in beta!