Since we launched #KUBE100 – the world’s first k3s-powered, managed Kubernetes service – we’ve had a lot of questions from our members about the differences between k3s and k8s (full-blown Kubernetes), aside from each project's choice of how to capitalise the "K" (or not).
Before we continue, if you haven’t already applied to join the #KUBE100 beta, find out more and apply for access here. You’ll get exclusive access to our Slack community and $70 free credit a month to have a play with the beta.
For those of you not in the know, Kubernetes is a "container orchestration platform". This effectively means taking your containers (everyone's heard of Docker by now, right?) and choosing which machine out of a group of them to run that container on.
It also handles things like upgrades of your containers, so if you make a new release of your website, it will gradually launch containers with the new version and gradually kill off the old ones, usually over a minute or two.
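To make that concrete, here's a minimal sketch of a Kubernetes Deployment manifest that asks for three copies of a website container and a gradual rollout of new versions. The names and image are hypothetical examples of ours, not from any particular project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-website            # hypothetical name
spec:
  replicas: 3                 # run three copies across the cluster
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # kill off at most one old pod at a time
      maxSurge: 1             # launch at most one extra new pod at a time
  selector:
    matchLabels:
      app: my-website
  template:
    metadata:
      labels:
        app: my-website
    spec:
      containers:
      - name: web
        image: example/my-website:v2   # bump this tag to trigger a rolling update
```

Changing the image tag and re-applying the manifest (`kubectl apply -f deployment.yaml`) is what triggers the gradual replacement of old containers with new ones described above.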
K8s is just an abbreviation of Kubernetes ("K" followed by the 8 letters "ubernete" followed by "s"). However, normally when people talk about either Kubernetes or K8s, they are talking about the original upstream project, designed by Google as a highly available, hugely scalable platform.
For example, there's a video on YouTube of a Kubernetes cluster handling a zero-downtime update while serving 10 million requests per second.
The problem is that while you can run Kubernetes on your local developer machine with Minikube, if you're going to run it in production you very quickly get into the realm of "best practices", with advice like:
Separate your masters from your nodes - your masters run the control plane and your nodes run your workload - and never the twain shall meet.
Run etcd (the database for your Kubernetes state) on a separate cluster to ensure it can handle the load.
Ideally, have separate Ingress nodes so they can handle the incoming traffic easily, even if some of the underlying nodes are slammed busy.
Very quickly this can get you to 3 x K8s masters, 3 x etcd, 2 x Ingress plus your nodes. So a realistic minimum of 8 medium instances before you even get to "how many nodes do I need for my site?".
Don't misunderstand us: if you're running a production workload, this is VERY sane advice. There's nothing worse than trying to debug an overloaded, down production cluster late on a Friday night!
However, if you just want to learn Kubernetes, or maybe host a development/staging cluster for non-essential things, it feels like overkill, right? At least it does to us. If I want to fire up a cluster to see if my Kubernetes manifests (configuration for the deployments, etc.) are correct, I'd rather not incur a cost of over a hundred dollars a month to do it.
Rancher Labs have released a number of pieces of software in this ecosystem - for example Longhorn, a lightweight, reliable distributed block storage system for Kubernetes. However, to bring us back on topic, they are also the authors of k3s.
Due to its low resource requirements, it's possible to run a k3s cluster on machines with as little as 512MB of RAM. This means we can allow pods to run on the master, as well as on the nodes.
And of course, because it's a tiny binary, it means we can install it in a fraction of the time it takes to launch a regular Kubernetes cluster! We generally achieve sub-two minutes to launch a k3s cluster with a handful of nodes, meaning you can be deploying apps to learn/test at the drop of a hat.
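To give a feel for quite how small that install is, the official k3s quick-start (at the time of writing) boils down to a single command. This sketch assumes a Linux host with curl and root access - it's not something to paste blindly into a production box:

```shell
# Download and run the k3s installer (from the official k3s quick-start)
curl -sfL https://get.k3s.io | sh -

# k3s bundles kubectl, so once the service is up you can check your node:
sudo k3s kubectl get nodes
```

Additional nodes can then be joined to the cluster using the same installer with the server's address and its node token.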
Both its reputation and adoption are growing rapidly too, with almost 13k GitHub stars since its launch in early 2019, and it was recently crowned the number 1 new developer tool of 2019 by StackShare.
Well, kind of. When most people think of Kubernetes, they think of containers automatically being brought up on other nodes (if a node dies), of load balancing between containers, of isolation and rolling deployments - and all of those advantages are the same between "full-fat" K8s and k3s.
However, it's not all sunshine and roses - if it were, everyone would be using k3s. So why aren't they?
Firstly, at the time of writing (k3s v0.8.1) there is only the option to run a single master. This means that if your master goes down, you lose the ability to manage your cluster (although all your existing containers will continue to run). However, if you run your cluster state in an external database, you can already launch with multiple masters, and there is work being done for the k3s v1.0 GA to support multiple masters natively within k3s.
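As a sketch of the external-database route, the k3s server takes a datastore endpoint flag, so multiple masters can share one database. The flag name is from the k3s documentation; the MySQL host and credentials below are hypothetical placeholders:

```shell
# Hypothetical example: point each k3s master at a shared external
# datastore (here MySQL) so more than one master can run at once.
k3s server \
  --datastore-endpoint="mysql://user:password@tcp(db-host:3306)/k3s"
```

Run the same command on each would-be master, all pointing at the same database, and the cluster state lives there instead of in a local SQLite file.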
Secondly, the default database in single-master k3s clusters is SQLite. This is great for small databases that don't see much action, but it can quickly become a major pain if it's being hammered! However, the changes happening in a Kubernetes control plane are mostly small writes - updating deployments, scheduling pods, and so on - so the database load isn't too much for a small dev/test cluster.
If you want production, we'd recommend a full Kubernetes installation - we have a guide on installing Kubernetes on Civo with Kubespray.
If you want a learning playground, development or staging cluster, why not apply to join our k3s-powered, managed Kubernetes service - it's much cheaper, quicker to launch and you'll get some free credit each month to try it out while we are in beta!