Since we launched the world’s first k3s-powered managed Kubernetes service, we’ve had a lot of questions from our members about the differences between k3s and k8s (full-blown Kubernetes), aside from each project's choice of how to capitalise the "K" (or not).

Before we continue: if you're not already part of Civo, sign up today and launch a cluster in just a few clicks - you'll even get $250 to get you started.

What is Kubernetes?

For those of you not in the know, Kubernetes is a "container orchestration platform". This effectively means taking your containers (everyone's heard of Docker by now, right?) and choosing which machine out of a group of them to run that container on.
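As a sketch of what that looks like in practice, here is a minimal, hypothetical Deployment manifest (the names and image are placeholders): you declare what to run and how many copies, and Kubernetes decides which machines actually run them.

```yaml
# deployment.yaml - a minimal illustrative example; names are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-website
spec:
  replicas: 3                # ask for three copies; Kubernetes picks the machines
  selector:
    matchLabels:
      app: my-website
  template:
    metadata:
      labels:
        app: my-website
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image works here
        ports:
        - containerPort: 80
```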

It also handles things like upgrades of your containers, so if you make a new release of your website, it will gradually launch containers with the new version and gradually kill off the old ones, usually over a minute or two.
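That gradual rollover is configurable. As an illustrative fragment (not any particular cluster's defaults), a Deployment's update strategy can cap how many old containers are killed and how many new ones are started at any one time:

```yaml
# excerpt of a Deployment spec - illustrative values
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one old pod taken down at a time
      maxSurge: 1         # at most one extra new pod during the rollout
```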

So what is K8s?

K8s is just an abbreviation of Kubernetes ("K" followed by 8 letters "ubernete" followed by "s"). However, when people talk about either Kubernetes or K8s, they normally mean the original upstream project, designed by Google as a highly available and hugely scalable platform.

For example, there's a video on YouTube of a Kubernetes cluster handling a zero-downtime update while serving 10 million requests per second.

The problem is that while you can run Kubernetes on your local developer machine with Minikube, if you're going to run it in production you very quickly get into the realm of "best practices", with advice like:

  1. Separate your masters from your nodes - your masters run the control plane and your nodes run your workload - and never the twain shall meet.
  2. Run etcd (the database for your Kubernetes state) on a separate cluster to ensure it can handle the load.
  3. Ideally, have separate Ingress nodes so they can handle incoming traffic easily, even if some of the underlying nodes are under heavy load.

Very quickly this gets you to 3 x K8s masters, 3 x etcd, 2 x Ingress, plus your nodes. That's a realistic minimum of eight medium instances before you even get to "how many nodes do I need for my site?".
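Point 1 above is usually enforced with a taint on the masters, so ordinary workloads are never scheduled there. On a typical upstream cluster, the control-plane nodes carry something like the following (the exact taint key varies between Kubernetes versions; this is a sketch):

```yaml
# excerpt of a control-plane Node object
spec:
  taints:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule   # regular pods won't be scheduled onto this node
```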

Don't misunderstand us: if you're running a production workload, this is VERY sane advice. There's nothing worse than trying to debug an overloaded, downed production cluster late on a Friday night!

However, if you just want to learn Kubernetes, or maybe host a development/staging cluster for non-essential things, it feels like overkill, right? At least it does to us. If I want to fire up a cluster just to check that my Kubernetes manifests (the configuration for deployments, etc.) are correct, I'd rather not incur a cost of over a hundred dollars per month to do it.

Enter Rancher’s k3s Kubernetes distro

Rancher Labs is a big player in the Kubernetes arena. Their flagship product Rancher is an amazing GUI for managing and installing Kubernetes clusters.

They have released a number of pieces of software that are part of this ecosystem, for example Longhorn which is a lightweight and reliable distributed block storage system for Kubernetes. However, to bring us back to topic, they are also the authors of k3s.

What is k3s and how is it different from k8s?

K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. To achieve this, the k3s team removed a lot of extra drivers that didn't need to be part of the core and that can easily be replaced with add-ons.

K3s is a fully CNCF (Cloud Native Computing Foundation) certified Kubernetes offering. This means you can write your YAML to operate against regular "full-fat" Kubernetes and it will apply equally against a k3s cluster.

Due to its low resource requirements, it's possible to run a cluster on machines with as little as 512MB of RAM. This means that we can allow pods to run on the master as well as on worker nodes.

And of course, because it's a tiny binary, it means we can install it in a fraction of the time it takes to launch a regular Kubernetes cluster! We generally achieve sub-two minutes to launch a k3s cluster with a handful of nodes, meaning you can be deploying apps to learn/test at the drop of a hat.

Both its reputation and adoption are growing rapidly too, with over 17k GitHub stars since its launch in early 2019, and it was recently crowned the number one new developer tool of 2019 by StackShare.

Is k3s the same as k8s, just better?

Well, pretty much. When most people think of Kubernetes they think of containers automatically being brought up on other nodes (if the node dies), of load balancing between containers, of isolation and rolling deployments - and all of those advantages are the same between "full-fat" K8s vs k3s.

So what are the differences in using k3s?

Primarily, the default datastore in single control-plane k3s clusters is SQLite. Its performance is great for small clusters, but it may need replacing with something more powerful, such as etcd, MySQL or PostgreSQL, if a larger cluster is required. Fortunately, k3s supports all of them (whereas upstream Kubernetes only supports etcd).
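Swapping datastores is a server-side setting in k3s. As a hedged example, pointing a k3s server at an external MySQL database looks roughly like this in its configuration file (the hostname and credentials here are placeholders):

```yaml
# /etc/rancher/k3s/config.yaml on the k3s server node - placeholder credentials
datastore-endpoint: "mysql://user:password@tcp(db.example.com:3306)/k3s"
```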

The other real difference only applies if you're one of the bigger cloud providers and have a lot of your extensions in the upstream Kubernetes source code: k3s removes all of those in-tree extensions and relies on standard interfaces, such as the Container Storage Interface (CSI), to implement them. This has no effect on end customers, though - only on the service provider itself.
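For example, storage that would once have been an in-tree driver is wired in through a CSI-backed StorageClass instead. With Rancher's Longhorn installed, that looks something like the following sketch (the provisioner name comes from Longhorn; the replica count is an illustrative parameter):

```yaml
# StorageClass backed by Longhorn's CSI driver - illustrative example
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: driver.longhorn.io   # Longhorn's CSI provisioner
parameters:
  numberOfReplicas: "2"           # illustrative Longhorn parameter
```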

Highlights of technical differences

| Feature | K3s | K8s |
| --- | --- | --- |
| Size | Smaller footprint (less than 200MB) | Larger footprint (hundreds of MB) |
| Dependencies | Fewer dependencies | More dependencies, including etcd, kube-proxy, etc. |
| Resource usage | Uses fewer resources (CPU, RAM, etc.) | Uses more resources, especially for large clusters |
| Deployment | Easier to deploy and manage | More complex deployment and management |
| Configuration | Simplified configuration with fewer options | More complex configuration with many options |
| Scalability | Limited scalability for very large clusters | Scales to larger clusters and workloads |
| High availability | May have limitations for high availability | Robust high-availability options, including cluster-level redundancy and automatic failover |
| Features | Fewer built-in features and extensions | Wide range of features and extensions available, including service discovery, load balancing and automatic scaling |
| Security | Fewer attack surfaces due to smaller codebase | Larger codebase with more potential attack surfaces |
| Compatibility | Limited compatibility with some Kubernetes tools and extensions | Strong compatibility with a wide range of Kubernetes tools and extensions |
| Use cases | Ideal for smaller, resource-constrained deployments, edge computing and IoT | Better suited for large, complex deployments with high resource requirements, such as big data, machine learning and high-performance computing |

Note that this table is still not exhaustive and there may be other technical differences between K3s and K8s that are not included here. Additionally, the suitability of either platform will depend on the specific needs of your deployment, so it's always a good idea to evaluate your options carefully before making a decision.

Should I choose k3s or k8s?

If you are looking for a lightweight, easy-to-use platform that is ideal for smaller deployments, resource-constrained environments, edge computing, or IoT, then K3s may be the better choice for you. With its smaller footprint, simplified configuration, and reduced resource usage, K3s can help you quickly deploy and manage containerized applications in a more efficient and cost-effective manner.

On the other hand, if you are working with large, complex workloads that require high scalability, performance, and availability, then K8s may be the better choice for you. With its robust features, extensive ecosystem, and wide range of extensions, K8s can help you easily manage and orchestrate even the most complex containerized applications.

It's also worth noting that both K3s and K8s have their strengths and weaknesses and may be more suitable for certain use cases over others. Ultimately, the best choice will depend on your specific needs, resources, and goals, so it's important to carefully evaluate your options and choose the platform that best meets your requirements.

More questions about Kubernetes?

Looking to learn more about Kubernetes and container orchestration?

We've got you covered! Check out our articles on Kubernetes vs Docker and Understanding K3s for a comprehensive comparison and deeper insights into these powerful platforms.

But why stop there?

Take your knowledge to the next level with our free Kubernetes course, complete with demos and real-world scenarios to help you master this essential tool.

Ready to put your skills to the test?

Why not join our k3s-powered, managed Kubernetes service? With fast, affordable deployment options for development, staging, and even production, we've got everything you need to get up and running in no time. Sign up now and experience the power of Kubernetes for yourself!