This blog post is an adaptation of a talk I gave at the Cloud Native meetup in Birmingham, UK, in February 2020. It details the advantages of k3s, a lightweight Kubernetes distribution we have deployed as part of a managed Kubernetes service. Developed by Rancher Labs, k3s allows for quick deployments for testing, CI/CD runs, and getting to grips with Kubernetes without having to commit to large-scale infrastructure and the costs that would bring.
k3s is best considered a Kubernetes distribution. It is not a fork: it does not, and is not intended to, diverge from the main Kubernetes codebase. The code that powers the individual Kubernetes components is the same in k3s as in the full-fledged Cloud Native Computing Foundation-hosted Kubernetes project (K8s). What is different about k3s is the way it packages that software to orchestrate containers.
The three philosophies underpinning k3s are:

- Include only the most efficient set of components needed to run Kubernetes
- Be lightweight and economical with resources
- Remain fully compatible with the main Kubernetes project

The three philosophies all play together: by having an ethos of shipping only the most efficient set of components, Rancher can make the most lightweight and resource-economical product, while ensuring the components that make it up are fully compatible with the main Kubernetes project.
k3s architecture from Rancher Labs
So, compared to the original Kubernetes project, k3s differs in important ways to achieve the aims and philosophy above. The main differences are:
- All Kubernetes control plane components combined into a single binary
- Cross-compiled for the ARM architecture
- Support only for Container Storage Interface (CSI)-compatible storage, dropping the cloud provider plugins present in the in-tree distribution
- Automatic TLS certificate creation and rotation
- Bundled user-space tools, such as kubectl, crictl, and ctr
- Bundled commonly-found and usability-increasing components, such as CoreDNS, Metrics Server, and the Traefik ingress controller
- Kine, which lets the cluster's key-value store run on a database of your choice (SQLite by default, or an external database such as MySQL or PostgreSQL) instead of etcd
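As a sketch of what this single-binary packaging means in practice, the commands below stand up a two-node cluster using the official k3s install script. The hostnames, database credentials, and token are placeholders, and the external datastore is optional - by default k3s simply uses embedded SQLite:

```shell
# Install and start a k3s server (the single binary runs the whole
# control plane). The optional --datastore-endpoint flag points Kine
# at an external database instead of the embedded default; the
# hostname and credentials here are illustrative placeholders.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://user:pass@tcp(db.example.com:3306)/k3s"

# On a second machine, join the cluster as an agent. The join token
# is read from /var/lib/rancher/k3s/server/node-token on the server.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://server.example.com:6443 \
  K3S_TOKEN=<token-from-server> sh -

# The bundled kubectl is available immediately on the server:
k3s kubectl get nodes
```

Because the user-space tools are bundled, there is nothing further to install before you can interact with the cluster.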
Given the differences from in-tree Kubernetes, you'd be forgiven for thinking that k3s is an entirely different product, composed of a large amount of code separate from upstream Kubernetes that would be difficult to maintain. As it happens, the parts unique to k3s represent only 1000 lines of code overall!
This is due in no small part to some very clever k3s developers both at Rancher and in the wider community, but also due to a natural convergence of the projects. In fact, k3s code has been merged into the upstream Kubernetes codebase, keeping the projects tightly coupled.
To simplify cluster set-up and allow you to move to the "fun part" of actually deploying your applications in Kubernetes, k3s also takes care of certificate generation and rotation. This automates TLS encryption between the nodes in your cluster, and is one less headache for you.
At Civo, we obviously believe in Kubernetes as the future way to deploy and manage cloud computing workloads. Just like the container paradigm changed our understanding of shipping code and being able to have it deploy consistently, Kubernetes is changing the thinking around managing these containers in the real world.
To this end, we believe in the value of an easy-to-deploy and efficient Kubernetes service that serves a variety of use cases. Whether you want to learn about container orchestration, test out a new application, deploy a CI/CD pipeline for your projects, or smoothly scale your application for high availability, you shouldn't have to worry about the underlying nitty-gritty unless you want to.
By leveraging the efficiency of k3s, the Civo Kubernetes platform achieves two things that are both important for developers: Speed of deployment and economy of resources.
The fact that the k3s installation includes user tools as well as an ingress controller and metrics out of the box means you can be up and running with your applications quickly. Of course, being fully compatible with base Kubernetes means that should you wish to apply your own components to your cluster, you are free to do so - a custom Helm operator is included! In the world of the CNCF landscape, where hundreds if not thousands of applications exist, there is an advantage to having a proven set bundled in by default.
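To illustrate the bundled Helm operator: k3s automatically applies any manifest placed in its server-side manifests directory, including HelmChart resources handled by that operator. The chart name and repository below are illustrative examples, not part of the default installation:

```shell
# k3s watches this directory on the server and applies anything in it.
# A HelmChart resource asks the bundled Helm operator to install a chart.
sudo tee /var/lib/rancher/k3s/server/manifests/demo-nginx.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: demo-nginx             # illustrative name
  namespace: kube-system
spec:
  repo: https://charts.bitnami.com/bitnami   # example chart repository
  chart: nginx
  targetNamespace: default
EOF

# Watch the chart's resources appear via the bundled kubectl:
k3s kubectl get pods -n default
```

The same mechanism is how k3s deploys its own bundled components, so your charts are managed in a consistent way alongside them.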
Another speed benefit of k3s's unified approach arises from its low system requirements: not only does it include everything you need to get started with your cluster, it also deploys quickly. We regularly see newly-provisioned clusters launch in around two minutes, and we are working on a back-end platform upgrade that aims to speed this up even further.
The economy of resources allowed by k3s is such that you can actually run it on edge devices or, in the case of a cloud-based service such as ours, on a single node if you so wish. In a k3s cluster, the master node also runs workloads rather than acting solely as the control plane. This represents a cost saving to you in real terms, as the computing capacity of the master node is used most efficiently.
We would love to hear about what you build, your learnings and discoveries, as well as what you would like to see the platform provide. If you have any questions or comments, feel free to reach out to us on Twitter - find us @Civocloud and me at @andyjeffries.