If you are exploring the cloud-native world, the chances of you stumbling across the term “Kubernetes” are high. Kubernetes, also known as k8s, is an open-source system whose primary job is container orchestration. It has quickly become a lifeline for managing containerized applications by automating their deployment, and cloud-native developers commonly use it as the main API for building and deploying reliable, scalable distributed systems.

Throughout this article, we will look at what k3s is, including its architecture, setup, and uses, and outline how you can get started with k3s and launch your first cluster in under 90 seconds.

What is k3s?

Over the years, there has been growing demand for a quicker, simpler and more cost-effective way to launch Kubernetes clusters. In 2019, Rancher Labs (now part of SUSE) launched ‘k3s’, a streamlined version of Kubernetes that removes much of the time and effort involved in launching a cluster. By stripping out parts of the Kubernetes source code that are not typically required, Rancher Labs was able to produce a single binary that is a fully functional distribution of Kubernetes.

Since being introduced, k3s has become a fully certified, CNCF (Cloud Native Computing Foundation) conformant Kubernetes distribution, meaning that YAML you write to run against k8s will also apply against a k3s cluster. It is packaged as a single binary that includes everything you need to run Kubernetes. Through Civo Academy, you can learn more about the background of k3s in our “Kubernetes Introduction” module.
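
As a quick illustration of that conformance, the same manifests and kubectl commands work unchanged against either distribution. A minimal sketch, assuming you already have a kubeconfig file for each cluster (the paths below are placeholders):

# The same Deployment manifest applies to a k8s or a k3s cluster; only the kubeconfig changes
kubectl --kubeconfig ~/.kube/k8s-config apply -f deployment.yaml
kubectl --kubeconfig ~/.kube/k3s-config apply -f deployment.yaml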

What does k3s stand for?

‘K8s’ is shorthand for Kubernetes, a 10-letter word with 8 letters sitting between the ‘K’ and the ‘S’. As k3s is the simplified version of k8s, Rancher Labs designed the name to be ‘half as big’: a 5-letter word, with 3 letters sitting between the ‘K’ and the ‘S’.

Is k3s the same as Kubernetes?

Both k3s and Kubernetes can be classified as container orchestration tools, and they share most of the same features. k3s adds some capabilities that make it leaner to run than stock Kubernetes, such as automatic deployment of manifests placed on the server, single-node or dedicated server (master) node installations, and support for backing the cluster with databases such as SQLite, MySQL, and PostgreSQL in place of etcd.
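
For example, k3s automatically applies any manifest placed in its server manifests directory, and the server can be pointed at an external database with a single flag. A minimal sketch, assuming a default k3s install on the server (the manifest name and MySQL connection string are placeholders):

# Any manifest copied here is picked up and applied to the cluster automatically
sudo cp my-app.yaml /var/lib/rancher/k3s/server/manifests/

# Run the k3s server against an external MySQL datastore instead of the default SQLite
k3s server --datastore-endpoint="mysql://user:password@tcp(db-host:3306)/k3s"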

If you are interested in learning more about how k3s is different from k8s, check out our “k3s vs k8s” article, which outlines these differences.

Who is able to use k3s?

K3s is small in size and highly available, and as a result it can be used to create production-grade Kubernetes clusters in resource-constrained environments and remote locations. This makes it a great fit for edge computing and IoT devices such as the Raspberry Pi. Continuous integration (CI) pipelines also favour k3s because it is lightweight and fast and simple to deploy.

How does k3s work?

The foundation of k3s is simplicity: developers should be able to get a fully-fledged Kubernetes cluster running in a short amount of time. Similar to k8s, it runs the API server, scheduler, and controller manager; however, in k3s, Kine is used as a shim so that the etcd datastore can be replaced with a SQL backend. Other additions in k3s include SQLite (the default datastore), a tunnel proxy, and Flannel. In k3s, all of these components run together as a single process, which is what makes it lightweight; in k8s, by contrast, each component runs as a separate process. This change allows a k3s cluster to spin up in a few seconds, as both the server and the agent run as single processes on a single node.
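
To see this in practice, you can stand up a single-node k3s cluster on a Linux machine with the upstream install script. A minimal sketch, assuming a systemd-based host with curl available:

# Install k3s as a single binary and systemd service (server and agent in one process)
curl -sfL https://get.k3s.io | sh -

# The whole control plane runs under a single service
sudo systemctl status k3s

# Confirm the node is ready using the bundled kubectl
sudo k3s kubectl get nodes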

What is the k3s architecture?

K3s was designed around two processes: the server and the agent. The server runs the basic Kubernetes control-plane components (the API server, controller manager, and scheduler) along with SQLite and the reverse tunnel proxy. The k3s agent then runs the kubelet, kube-proxy, the container runtime, load balancers, and other additional components.
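
To add a dedicated agent node to an existing server, the agent only needs the server's URL and its join token. A minimal sketch using the upstream install script, with the server IP and token shown as placeholders:

# On the server: read the join token generated at install time
sudo cat /var/lib/rancher/k3s/server/node-token

# On the agent machine: install k3s in agent mode and register it with the server
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -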

Diagram by Rancher in their Introduction to k3s blog

What are the benefits of k3s?

When it was created in 2019, k3s was designed to be the lightweight version of Kubernetes, allowing users to seamlessly launch a cluster without carrying unused source code. This is built on three main factors: being lightweight, highly available, and suited to automation.

Lightweight

As previously mentioned, k3s is designed to be a single binary of less than 40MB that implements the Kubernetes API. This low resource requirement makes it possible to run a cluster on machines with as little as 512MB of RAM, and typically allows a cluster with a handful of nodes to be deployed in under two minutes.

Highly available

Over time, edge and IoT platforms have begun to transform the way containers operate and are managed. Edge and IoT devices typically have limited resources, which can make full Kubernetes appear too large for them. Because Rancher designed k3s to be lightweight, removing lines of code that weren’t required, it is easy to install and distribute across edge and IoT environments.

Automation

Because k3s is fast and lightweight, it is well suited to CI automation, where production components and infrastructure are reproduced on a smaller scale for testing. CI pipelines benefit most from tools that are quick, lightweight, and simple, and k3s is a perfect candidate: it installs in a few seconds with a single command and is easy to manage as part of the CI automation process.
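
As an illustration, a CI job could create a throwaway cluster, run its checks, and tear everything down afterwards. A minimal sketch, assuming a Linux runner with root access and a manifests/ directory in the repository (both assumptions rather than requirements of any particular CI product):

# Install a disposable single-node cluster on the CI runner
curl -sfL https://get.k3s.io | sh -

# Wait for the node to become ready, then deploy the manifests under test
sudo k3s kubectl wait --for=condition=Ready node --all --timeout=120s
sudo k3s kubectl apply -f manifests/

# ... run the test suite against the cluster here ...

# Remove k3s completely once the job finishes (uninstall script is installed by get.k3s.io)
sudo /usr/local/bin/k3s-uninstall.sh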

How to get started using k3s?

Unlike k8s, where deploying a cluster can take up to 10 minutes, with k3s you can deploy a working cluster in under 90 seconds. To get you started, we will outline how to install k3s and what your next steps should look like.

How to install k3s?

There are several ways to create a cluster using Civo's k3s service. You can get started by creating a cluster from the Civo user interface, or you can use the Civo command line interface (CLI) or one of the Civo infrastructure providers.

Civo user interface

To create a cluster using the Civo user interface, create an account and log into the Civo dashboard. From the dashboard, click on the Kubernetes tab and then on “launch a cluster.” You will then be asked to name the cluster and choose the number of nodes. You can also select the CPU size according to your needs and pick any applications you want to install from the marketplace. Finally, click the “create cluster” button to create your cluster.

Civo providers

You can use the Civo Terraform provider to take an infrastructure-as-code approach to creating a cluster. Civo also has a Pulumi provider, which lets you use different programming languages to provision and maintain your infrastructure.

Civo CLI

Written in the Go programming language, the Civo CLI is open source and can be installed on different operating systems; you can learn how to install it from the Civo GitHub page. After installing the CLI, you need to save your API key, which is one of the essential elements for provisioning your infrastructure. To get the API key, go to the user settings in the Civo dashboard and click on the security section. Copy the API key and save it using the following command:


civo apikey save [API_KEY_NAME] "[API_KEY]"

Replace API_KEY_NAME with a name of your choice and API_KEY with the API key you copied earlier. You can verify the saved API key with this command:


civo apikey ls

Next, check which regions are available for the cluster with the following command:


civo region ls

The above command lists all the regions in which you can create a cluster. Now, to create a default cluster, run the command below:


civo k3s create

You can verify the cluster's creation and status with the civo k3s ls command. The “create” command creates a default cluster with a single node in the default region. The default cluster size is medium, and you can scale the size up or down according to your needs. After that, you will need to save the cluster's configuration file. To do that, run the command below in your terminal:


civo k3s config [CLUSTER_NAME] --save -p="[PATH]"

Replace CLUSTER_NAME with the name of your cluster and PATH with the path where you want to save the kubeconfig file. Finally, we will create a multi-node cluster along with an application from the Civo marketplace: a cluster containing the Rancher application and two nodes. Execute the following command in your terminal to create the multi-node cluster:


civo k3s create -a Rancher -n 2 -w

The above command creates a two-node cluster with the default name and size, with the Rancher application installed. In this way, you can create single-node and multi-node clusters using the Civo CLI within seconds.
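
Once the kubeconfig from the earlier civo k3s config step has been saved, you can point kubectl at the new cluster straight away. A minimal sketch, assuming kubectl is installed locally and PATH is the location you saved the config to:

# Use the saved kubeconfig for this shell session
export KUBECONFIG=[PATH]

# Check that the nodes and the Rancher application's pods are running
kubectl get nodes
kubectl get pods --all-namespaces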

Next steps

If you are interested in hearing more about the future of k3s, our latest meetup with Kunal Kushwaha, Kai Hoffman, Dinesh Majrekar, and David Fogle dives deeper into this topic and answers your burning questions.

To get started learning more about k3s, why not join our k3s-powered Kubernetes service and have your first cluster deployed in under 90 seconds?