Configure a Multi-Node Cluster with Kubeadm & Containerd
With Civo Academy, learn how to configure and join multi-node clusters using Kubeadm and Containerd.
What is Kubeadm?
The next option we'll look at for trying out a Kubernetes cluster is called Kubeadm. Kubeadm is a tool with which we can set up a multi-node Kubernetes cluster. It is very popular: you can run multiple VMs on your machine and configure the Kubernetes master and its node components on them. If you have limited local resources but want to use Kubeadm, you can use cloud-based virtual machines instead.
Requirements for Kubeadm
We will be using Civo instances, but before we get started with that, let's look a bit more into the requirements for Kubeadm. https://kubernetes.io/docs/home is the official Kubernetes documentation site. The first requirement it mentions is a compatible Linux host, for example a distribution such as Debian, Red Hat, or Ubuntu. After that: two gigabytes or more of RAM per machine, two or more CPUs, and full network connectivity between all the machines in the cluster, over a public or private network. You will also need unique hostnames, MAC addresses, and product_uuid values for every node.
Specific ports also need to be open on your machines. And then there is the crucial part: swap must be disabled. The reason for this is something we'll discuss when we get to the hands-on part.
So there are a few prerequisites you need to verify, and you can do so with a handful of commands. You should also let iptables see bridged traffic, and check that the required ports are free, noting which service each port will be used for.
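The verification steps above can be sketched as a small, non-destructive script. The file paths used here are the standard Linux locations for this information; every check is guarded so the script only reports, never fails:

```shell
# Pre-flight checks for a prospective Kubernetes node (a sketch).

# 1. br_netfilter must be loaded so iptables can see bridged traffic.
if grep -q br_netfilter /proc/modules 2>/dev/null; then
  echo "br_netfilter: loaded"
else
  echo "br_netfilter: not loaded (load it with: sudo modprobe br_netfilter)"
fi

# 2. Every node needs a unique product_uuid and MAC address.
#    Compare these values across all of your nodes:
cat /sys/class/dmi/id/product_uuid 2>/dev/null || echo "product_uuid not readable (may need sudo)"
ip link show 2>/dev/null | grep ether || echo "ip command not available"

# 3. Swap should be off; /proc/swaps should list no active devices.
active_swaps=$(tail -n +2 /proc/swaps 2>/dev/null | wc -l)
echo "active swap devices: ${active_swaps}"
```

Run this on each machine and compare the product_uuid and MAC values by hand; kubeadm will refuse to form a cluster from nodes that share them.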
We're going to be installing Kubeadm, Kubelet, and Kubectl. We will also show you how to install Containerd, because we need a container runtime; you can use Docker, Containerd, or CRI-O. In this particular session we'll use Containerd, along with instances on the Civo cloud, and we'll show you how easy setting up your instances on the Civo cloud is. We'll have a control plane and two nodes, or in other terms, a master node and two worker nodes. So, without further ado, let's get started.
Creating an instance
The first thing I'm going to do is go to https://dashboard.civo.com/instances in my account; or, if you're at the dashboard, you can click on the Compute Instances button. I'm going to create three instances here. I'll create my first instance and give it a name, for example, Control-Plane. Let's keep the size as Medium, and I'm going to select Ubuntu as the base image, keeping everything else at its defaults. Next, I will click on Create. We will create two more after this. I'll create one more instance and call it Node-1. It will be of Small size, again with Ubuntu as the base image, and then I will create it. Finally, I will create the third one, called Node-2. Again, you can select Small or any other size you like; I will choose Ubuntu and keep everything else at its defaults.
In the instances section, you can see that three instances are being created. The last two are still starting, but you can see how easy it was to launch an instance. For example, I click on the first instance, and now I can work with it. You will be able to see the IP address of the first instance, and I can SSH into it with a command like
ssh root@<IP_ADDRESS>. In place of <IP_ADDRESS>, put the IP address shown for the host. It will ask whether you want to connect, and I will respond yes. After this, it will ask for a password. You can find the password by clicking on SSH Information on the right-hand side of the screen, then clicking View SSH Information. The password is shown hidden, so I will copy it and paste it here. How simple was that? I'm now inside my instance from my local machine, controlling it via SSH.
Creating Containerd configuration file
That was the first step. Next, we will install some packages. You need to SSH into each node and create a Containerd configuration file by executing the command below.
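The command in question writes the module names to a file under /etc/modules-load.d/, which the system reads at boot. The filename containerd.conf is the usual convention for this step:

```shell
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```

This only registers the modules for future boots, which is why the modprobe commands below are still needed to load them immediately.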
I will copy it and run it here. This command instructs the node to load the overlay and the br_netfilter kernel modules. Normally we would have to restart the node to load them; instead of restarting, we can run the commands
sudo modprobe overlay and
sudo modprobe br_netfilter to load the modules immediately. The next step will be to set the system configuration for Kubernetes networking. You can set the bridge and IP forwarding settings with the below command.
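The networking settings referred to here are usually written to a sysctl drop-in file. The filename 99-kubernetes-cri.conf is conventional; the three keys are the ones Kubernetes networking relies on:

```shell
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
```

The first and last keys let iptables see bridged traffic, and ip_forward allows the node to route packets between pods.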
We have to apply these settings by executing the sysctl command on the system.
With the command
sudo sysctl --system, these settings are now applied. The next step is the critical part: installing Containerd. First, I will update the apt package lists and install Containerd using the command
sudo apt-get update && sudo apt-get install -y containerd. Next, inside the /etc folder, we will create a configuration directory for Containerd. Then we will generate the default configuration file and place it there. Hence, the first thing I'm going to do is create the Containerd directory using the command
sudo mkdir -p /etc/containerd. Next, I will generate the default configuration by using the command
sudo containerd config default | sudo tee /etc/containerd/config.toml. Now, you will see that our default configuration is set. Next, I have to restart my Containerd to ensure that the new configuration file is being used. The command you will use here is
sudo systemctl restart containerd.
Installing dependency packages
Remember when we discussed in the previous video that we need to disable swap memory? This is because the Kubernetes scheduler determines the best available node on which to deploy newly created pods. If memory swapping is allowed on a host system, it can lead to performance and stability issues within Kubernetes. For this reason, Kubernetes requires us to disable swap memory. In Linux, you can do it simply with the command
sudo swapoff -a. Now, we have to install a few dependency packages, such as apt-transport-https and curl, using the command
sudo apt-get update && sudo apt-get install -y apt-transport-https curl. The next step is to download and add the GPG key. I will download the key from
packages.cloud.google.com by using the command
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - and will add it. The next step is to add the Kubernetes package repository to the apt sources list.
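For reference, the repository entry that was paired with this key for the 1.20-era packages looked like the following. Note that this apt.kubernetes.io repository has since been deprecated in favor of pkgs.k8s.io, so newer guides will use a different entry:

```shell
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
```

The kubernetes-xenial suite name was used for all Debian-based distributions at the time, not only Ubuntu 16.04.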
Updating package listings and installing Kubeadm, Kubectl and Kubelet
The next thing we will do is update the package listings. The command for that is
sudo apt-get update. Note that if you are using dpkg and you get a dpkg lock message, wait a few minutes before trying the command again. With the package listings updated, we will now install Kubelet, Kubeadm, and Kubectl. The command for the installation is
sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00. I am installing a specific version of Kubelet, Kubeadm, and Kubectl. The next thing we will do is turn off the automatic updates by putting these three on hold. With the command
sudo apt-mark hold kubelet kubeadm kubectl, it will put these packages on hold, turning off automatic updates for them.
Initializing the cluster
Now, the next step is to initialize our cluster. This particular step only needs to be done on the control-plane node, or the master node. (If you want multiple control-plane nodes, the additional ones are joined to this first one later rather than initialized separately.) I'm going to start by running the kubeadm init command to initialize my cluster:
sudo kubeadm init --pod-network-cidr 192.168.0.0/16. You will see a kubeadm join message after the command completes, and it is an important one. You have to keep a note of it: open a notepad and copy-paste it for later use.
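The join message printed by kubeadm init generally has the shape below. The placeholders are stand-ins: the token, certificate hash, and control-plane address are unique to your cluster, and 6443 is the default API server port. Always use the exact command from your own output:

```shell
sudo kubeadm join <control-plane-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

If you lose the message, a fresh token and the matching command can be regenerated on the control plane with kubeadm token create --print-join-command.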
Setting up the Kubectl access
The next thing we have to do is set up Kubectl access. I have to make a directory for this with the command mkdir -p $HOME/.kube, and then copy the admin config into it using the
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config command. Now, I have to give my user ownership of it. I will do that by using the command
sudo chown $(id -u):$(id -g) $HOME/.kube/config. To test how that is working, you can check using the command
kubectl version. Kubectl is now configured, and we can access the cluster. We can also install the Calico network add-on. Calico provides simple, high-performance, secure networking, and many major cloud providers trust it. We can install the networking on the control plane by using the command
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml. This applies the Calico manifest to the cluster. If you want to see all the components and their installation, you can do that by using the command
kubectl get pods -n kube-system. Now you can see that it has all these pods.
Configuring the two worker nodes
The next thing we need to do is repeat this entire process on each worker node. We have to go back to our worker node; we will walk through this for just one node, but you can do it for as many as you like. I'm going to copy the IP address and SSH into the node using the command
ssh root@<IP_ADDRESS>. Next, I will type yes and enter my password. After that, you have to do exactly the same setup as before: install the packages, load the kernel modules, set the system configurations, and apply those settings. After that, we install Containerd. I'm doing the same thing I did earlier because this node also needs Kubeadm. But we will not initialize the cluster here; instead, the join command we saved earlier is what we will use to connect this node to the control plane. After this, I have to restart Containerd, turn off the swap memory, install the dependencies, download the GPG key, add Kubernetes to the repository list, update the package listings, and finally install Kubelet, Kubeadm, and Kubectl. Last but not least, we also have to turn off the automatic updates. So, in a nutshell, it's setting up Kubeadm and turning off the automatic updates.
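The worker-node steps listed above can be condensed into one sketch. This is not an official script: the DRY_RUN guard is my addition so you can review the sequence safely (it prints the commands by default; set DRY_RUN=0 on a real node to execute them), and it assumes the same pinned package versions used earlier. The apt key, repository entry, and containerd config file still need to be created as shown in the earlier sections:

```shell
#!/usr/bin/env bash
# Condensed sketch of the worker-node setup described above.
# Default is a dry run that only prints each command; set DRY_RUN=0 to execute.
set -u

DRY_RUN="${DRY_RUN:-1}"
K8S_VERSION="1.20.1-00"   # same pinned version as on the control plane

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

# Kernel modules and sysctl settings for container networking
run sudo modprobe overlay
run sudo modprobe br_netfilter
run sudo sysctl --system

# Container runtime (generate /etc/containerd/config.toml as shown earlier)
run sudo apt-get update
run sudo apt-get install -y containerd
run sudo systemctl restart containerd

# Kubernetes requires swap to be disabled
run sudo swapoff -a

# Kubernetes packages, pinned and held to turn off automatic updates
# (add the GPG key and repository entry first, as shown earlier)
run sudo apt-get install -y "kubelet=$K8S_VERSION" "kubeadm=$K8S_VERSION" "kubectl=$K8S_VERSION"
run sudo apt-mark hold kubelet kubeadm kubectl
```

After this finishes on a worker, the node is ready for the kubeadm join command; do not run kubeadm init on it.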
Joining all the nodes
We already initialized our cluster, so we don't have to do that again. The next thing we need to do is join each worker using the command we got previously, which is
sudo kubeadm join, run on each of the two worker nodes. Here, you will need the join message you previously copied into a notepad. Once it completes, you can see that the node has joined the cluster. It also says that if we run the command "kubectl get nodes" on the control plane, we will see those nodes. So we have a master node and two worker nodes. That was it for this installation, and I hope you were able to set up Kubeadm. You can now set up more nodes, play around with them, and learn more about how the networking between the nodes works. I hope you enjoyed this video. We'll see you in the next one.