Introduction

We have made much of why we went with Rancher’s k3s to underpin Civo’s managed Kubernetes service in posts such as Andy’s explanation of k8s vs k3s, but I wanted to take a deeper dive into k3s and why it is such a good technological fit for a service like ours.

Civo Kubernetes

Overview - Kubernetes and what it does

Two trends are increasingly obvious in modern software development. Firstly, continuous deployment and rapid delivery of new application versions are now the norm, rather than slow, monolithic version changes. Secondly, applications are deployed in ephemeral containers, often on virtual hosts located in efficient data centres rather than on one’s own premises.

To manage fleets of these containers, making sure they scale with demand and recover from outages or bugs, we need an orchestration layer sitting on top of them.

Enter Kubernetes, the rising star of the cloud-native world. Born at Google as a descendant of its internal Borg system, open-sourced in 2014 and donated to the Cloud Native Computing Foundation in 2015, Kubernetes has seen a meteoric rise in management, observability and application tooling since. But with this power to manage Google-scale applications come complexity and resource demands.

Rancher Labs’ k3s bills itself as a lightweight but fully-compliant Kubernetes distribution, humorously saying it is great for “Situations where a PhD in k8s clusterology is infeasible”. True to that no-nonsense approach, it is ideally suited to quick deployments on pretty much any hardware, rather than only the resource-intensive setups on which enterprise Kubernetes is commonly deployed.

In fact, it can be installed with a single shell script if you are so inclined, yet it remains fully compliant, meaning that you can deploy any Kubernetes application to it, whether through the traditional Helm chart approach or using something like the application marketplace on Civo’s managed Kubernetes.
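To give a sense of just how simple that is, here is a minimal sketch of a single-node install using the official install script (run on the machine you want to turn into a cluster):

```bash
# Install k3s and start it as a single-node cluster (server and workloads on one machine).
curl -sfL https://get.k3s.io | sh -

# k3s bundles kubectl, so you can check on the cluster straight away.
sudo k3s kubectl get nodes
```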

Why use k3s?

The first reason for recommending k3s comes down to sheer speed. If you are building test or dev environments in the cloud, as discussed below, you don’t want to wait for a cluster to come up every time you run your tests before you can start layering services and other code on top of it.

With the KUBE100 project we have managed to achieve a consistent deploy time of two minutes or less for clusters, and as Civo CTO Andy mentions here, 2020 will see us working to reduce this further with some clever but as-yet secret work.

Rancher k3s model

Related to the speed of deployment is k3s’s lighter resource footprint. With a binary that weighs in at about 50MB, and a leaner architecture than full traditional K8s in which the master node can also contribute to compute tasks, you can run a k3s cluster on far less beefy machines.

This translates directly into cost savings: you can achieve the same useful computing power on smaller, cheaper instances, because less of each node is taken up by orchestration overhead.

Where Kubernetes in its original “full-fat” K8s form was, and continues to be, designed for the hyper-scale deployments of the likes of Google (where it was originally conceived), most businesses operating clusters of containers do not need every configuration and deployment option it provides.

That said, k3s is fully upstream-compatible; it simply defaults to lighter options, such as SQLite for storage rather than etcd. Why overcomplicate things when you can achieve the same result in an easier and lighter way? Of course, if you would prefer to back your cluster with etcd, you can still do that with k3s, should you need to.
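In practice that choice is a single flag on the server. The sketch below assumes a recent k3s release, and the etcd endpoints are placeholders:

```bash
# Default: the server uses its embedded SQLite datastore, with nothing to configure.
k3s server

# Optional: back the cluster with an external etcd (or MySQL/PostgreSQL) datastore instead.
k3s server \
  --datastore-endpoint="https://etcd-1.example.com:2379,https://etcd-2.example.com:2379"
```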

Use cases and scenarios

Experimentation and learning

Keep hearing about Kubernetes, or about a particular application like Linkerd? Want a quick play with a fully-functional cluster that has the application already set up? Civo Kubernetes and our application marketplace let you do just that, launching a cluster with any number of applications within minutes and shortening the time to the fun stuff.

Or are you completely new to Kubernetes, and don’t want to spend time doing things The Hard Way, at least not yet? Spin up a cluster of your choice, keeping it bare-bones to understand the inner workings of Kubernetes components, or watch it with a tool like k9s while you deploy applications from the marketplace to see the changes they make, all within the time it takes to eat a sandwich at lunch.
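As a hypothetical sketch of that workflow using the Civo CLI and k9s (the cluster name is made up, and flags such as --applications may differ between CLI versions, so check `civo kubernetes create --help`):

```bash
# Create a cluster and pre-install a marketplace application alongside it.
civo kubernetes create playground --applications=Linkerd

# Save the cluster's kubeconfig locally so kubectl and k9s can talk to it.
civo kubernetes config playground --save

# Watch pods, deployments and services appear as the application installs
# (press 0 inside k9s to show all namespaces).
k9s
```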

Additionally, since launching #KUBE100 as a closed beta at the tail end of 2019, our community has put together some cool guides as examples of k3s usage.

CI/CD pipelines

A useful real-world application of a k3s cluster is continuous integration / continuous delivery (CI/CD). Whether you want to build a Continuous Deployment (CD) pipeline using Argo to redeploy an application whenever a build passes its tests, or integrate Kubernetes into a project on GitLab, our Kubernetes offering is ideal.
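As one illustrative sketch of the Argo side, assuming Argo CD is already installed in the cluster (the repository URL, path and application name below are placeholders):

```bash
# Register an application with Argo CD and let it sync automatically whenever
# the Git repository changes, e.g. after a passing build pushes new manifests.
kubectl apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}
EOF
```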

Resilient hosting

If you just want to run a blog in a Kubernetes environment and make sure that all traffic to your domains is secured with a wildcard certificate, our beta testing community has graciously contributed their knowledge and experience to let you do just that.
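Purely as a hedged sketch, a wildcard certificate request with cert-manager might look something like this; it assumes a recent cert-manager release and a DNS01-capable ClusterIssuer (here called letsencrypt-dns) already configured for your DNS provider, and the domain is a placeholder:

```bash
# Ask cert-manager for a wildcard certificate and store it in a TLS secret
# that your Ingress can reference.
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: blog-wildcard
  namespace: default
spec:
  secretName: blog-wildcard-tls
  issuerRef:
    name: letsencrypt-dns
    kind: ClusterIssuer
  dnsNames:
    - "example.com"
    - "*.example.com"
EOF
```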

Application authoring

As k3s is fully Kubernetes compatible, we even have a guide to the recently-released Helm 3, using it to build a chart that deploys an express.js application.
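As a quick sketch of what that looks like with Helm 3 (the chart name and image are placeholders):

```bash
# Scaffold a chart from Helm's default template.
helm create my-express-app

# Install it into the cluster, pointing it at your own express.js image,
# then check the release.
helm install my-express-app ./my-express-app \
  --set image.repository=example/my-express-app --set image.tag=latest
helm status my-express-app
```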

And best of all, you can be up and running with any of the above within minutes - and the knowledge is transferable to other Kubernetes distributions.

Beyond the development environment

Outside the developer space, Rancher has detailed industrial use cases for k3s, illustrating its capability to run in production-critical environments such as monitoring thousands of sensors on an oil rig.

The advantage of using a managed Kubernetes service is that it takes the headache out of server configuration. You can concentrate on application development and rapid prototyping without having to worry about the underlying infrastructure or how it is run. You simply get a working API endpoint with a public IP address within 120 seconds, along with pre-configured applications of your choice.
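Once you have saved the cluster’s kubeconfig locally (for example with the Civo CLI’s `civo kubernetes config <name> --save`), verifying that endpoint takes just a couple of commands; a minimal sketch:

```bash
kubectl cluster-info       # shows the public API endpoint the cluster exposes
kubectl get nodes -o wide  # confirms the nodes are Ready and lists their IPs
```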

Civo Kubernetes Marketplace

While we are still in beta, we envision our managed Kubernetes service as perfect for rapid prototyping, CI/CD runs and other developer scenarios where speed and performance (and cost) are critically important.

You can see how the service developed over its first months in this retrospective, which also highlights some of our community contributions.

Want to take k3s for a spin? Try it out!

We are expanding our managed Kubernetes service beta, and are accepting applications at https://www.civo.com/kube100. In return for your regular feedback, feature requests and contributions to our codebase, we offer $70 monthly credit for the duration of the beta and an exceedingly responsive and friendly community of Civo staff and fellow enthusiasts.

Sign up, give k3s a try, and you’ll see why we think it’s great. If you want to learn more about our service, take a look at some of the most frequently-asked questions about Civo Kubernetes.