Note
Please note that this guide was originally published in 2018 and refers to using Kubespray to set up Kubernetes. If you are using Civo's managed Kubernetes solution, this guide will not be applicable. Please see our managed Kubernetes FAQ for more information about the service.
Introduction
We get a lot of requests at Civo asking whether we support container technologies such as Docker or Kubernetes and how they integrate with our systems. The good news is I can confirm that we support containers on the Civo platform, and in this guide we will set up a basic Kubernetes cluster to get you started on your path of awesome containerisation.
Launching the nodes
For this guide we are going to launch four medium-sized instances, all running Ubuntu 16.04 LTS. The reason we use fairly large instances is the extra overhead Kubernetes puts on the operating system; you can run it on smaller instances, but we recommend the medium size for best results. We recommend naming the instances something like node-N.public.k8s.example.com, which leaves namespace room for a private Kubernetes cluster later on. For the purpose of this guide our instances are called n1.public.k8s.example.com, n2.public.k8s.example.com, n3.public.k8s.example.com and n4.public.k8s.example.com
N.B: It is important to have root user and SSH access to the instance(s) when creating them.
Once you have launched your instances, it is best to make sure all packages are up to date. On each machine, run the following:
apt update
apt upgrade -y
Once updated, we recommend you set up DNS for each instance, pointing each hostname at the instance's floating IP. If you are using Civo for your DNS, you can follow the guide here: Civo DNS guide
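Once the records are in place, you may want to confirm that each hostname resolves to the expected floating IP. A minimal check from your local machine, assuming the example hostnames used in this guide and that dig is installed:
# Check that each node hostname resolves to its floating IP
for node in n1 n2 n3 n4; do
  echo -n "${node}: "
  dig +short "${node}.public.k8s.example.com"
done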
Installing Kubernetes
We now need to install Kubernetes onto our instances. To do this we are going to use Kubespray, which is essentially a set of Ansible playbooks that make the initial setup and configuration easy and repeatable. We have our own forked Kubespray repository that you can clone here: Civo Kubespray. Our fork doesn't differ from the main Kubespray repository; we have simply tested this particular commit on Civo and know that it works.
On your local machine do something similar to the following:
cd ~/repos/
git clone https://github.com/absolutedevops/kubespray.git
cd kubespray
git checkout -b v2.5.0 tags/v2.5.0
We now need to copy the sample inventory directory to a new one for our cluster:
cp -r ~/repos/kubespray/inventory/sample ~/repos/kubespray/inventory/civo
We now need to edit the hosts.ini file, located at ~/repos/kubespray/inventory/civo/hosts.ini, and update it with your own Kubernetes node information. In our example it looks like this:
[kube-master]
n1.public.k8s.example.com
n2.public.k8s.example.com
[all]
n1.public.k8s.example.com ansible_ssh_host=n1.public.k8s.example.com ansible_ssh_user=root
n2.public.k8s.example.com ansible_ssh_host=n2.public.k8s.example.com ansible_ssh_user=root
n3.public.k8s.example.com ansible_ssh_host=n3.public.k8s.example.com ansible_ssh_user=root
n4.public.k8s.example.com ansible_ssh_host=n4.public.k8s.example.com ansible_ssh_user=root
[k8s-cluster:children]
kube-node
kube-master
[kube-node]
n1.public.k8s.example.com
n2.public.k8s.example.com
n3.public.k8s.example.com
n4.public.k8s.example.com
[etcd]
n2.public.k8s.example.com
n3.public.k8s.example.com
n4.public.k8s.example.com
We now need to set the overlay network to flannel. You can use any of the other overlay network options Kubernetes offers, but to keep things simple we will use flannel. In the file ~/repos/kubespray/inventory/civo/group_vars/k8s-cluster.yml, find the kube_network_plugin entry and change its value to flannel, as shown below:
kube_network_plugin: flannel
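If you prefer to make the change from the command line rather than in an editor, something like the following should work. This is a sketch that assumes the file contains a single kube_network_plugin: entry and that you have GNU sed (on macOS, use sed -i '' instead of sed -i):
# Replace whatever value kube_network_plugin currently has with flannel
sed -i 's/^kube_network_plugin: .*/kube_network_plugin: flannel/' ~/repos/kubespray/inventory/civo/group_vars/k8s-cluster.yml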
To run the Kubespray playbooks you need Ansible installed locally, with a minimum version of 2.4.2. You can check which version of Ansible you are running with:
ansible --version
If you are running a version older than 2.4.2 you can upgrade with:
pip install ansible --upgrade
We also need to ensure that each of our Kubernetes nodes has python installed. To do this simply run the following on each node:
apt install python
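If you would rather run this from your local machine than log in to each node in turn, a simple loop over SSH could look like the following (the hostnames are the examples used in this guide):
# Install the python package on every node over SSH
for node in n1 n2 n3 n4; do
  ssh "root@${node}.public.k8s.example.com" "apt install -y python"
done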
We are now ready to run the first setup playbook for Kubespray. Make sure you are able to connect as the root user to each of your Kubernetes nodes by hostname without being prompted for a password.
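A quick way to confirm this is to attempt a non-interactive connection to each node; if any of these commands fails, fix your SSH key setup before continuing (a sketch using the example hostnames from this guide):
# BatchMode makes ssh fail instead of prompting for a password
for node in n1 n2 n3 n4; do
  ssh -o BatchMode=yes "root@${node}.public.k8s.example.com" true && echo "${node}: SSH OK"
done
Once confirmed, install Kubespray's Python dependencies and run the playbook from your local machine: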
cd ~/repos/kubespray/
pip install -r requirements.txt --user
ansible-playbook -i inventory/civo/hosts.ini cluster.yml
The process is likely to take around 10 minutes to complete.
Accessing the cluster from your local machine
Now we have the cluster set up, we want to connect to it to make sure it's working as we expect. First we need to make a folder on our local machine called ~/.kube and download the configuration from one of the nodes in the cluster:
mkdir ~/.kube
scp root@n1.public.k8s.example.com:.kube/config ~/.kube/config
We now need to install the kubectl tool. On macOS you can do this with:
brew install kubectl
If you are running Ubuntu Linux, you can install the kubectl package as follows:
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
Once installed, we need to adjust the hostname used for the server, as the default config uses one of the cluster's internal IP addresses:
kubectl config set-cluster cluster.local --server=https://n1.public.k8s.example.com:6443
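You can check that the change took effect by viewing your kubeconfig; the server entry for the cluster.local cluster should now show the public hostname rather than an internal IP:
kubectl config view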
We can now test this is working by running the following:
kubectl get pods -n kube-system
We should get output similar to the below:
NAME READY STATUS RESTARTS AGE
kube-apiserver-n1.public.k8s.example.com 1/1 Running 0 18m
kube-apiserver-n2.public.k8s.example.com 1/1 Running 0 18m
kube-controller-manager-n1.public.k8s.example.com 1/1 Running 0 19m
kube-controller-manager-n2.public.k8s.example.com 1/1 Running 0 19m
kube-dns-79d99cdcd5-28d95 3/3 Running 0 18m
kube-dns-79d99cdcd5-xfprj 3/3 Running 0 17m
kube-flannel-b5ph8 2/2 Running 0 18m
kube-flannel-kdx8n 2/2 Running 0 18m
kube-flannel-n9dzv 2/2 Running 0 18m
kube-flannel-z2pnv 2/2 Running 0 18m
kube-proxy-n1.public.k8s.example.com 1/1 Running 0 19m
kube-proxy-n2.public.k8s.example.com 1/1 Running 0 18m
kube-proxy-n3.public.k8s.example.com 1/1 Running 0 19m
kube-proxy-n4.public.k8s.example.com 1/1 Running 0 19m
kube-scheduler-n1.public.k8s.example.com 1/1 Running 0 19m
kube-scheduler-n2.public.k8s.example.com 1/1 Running 0 19m
kubedns-autoscaler-5564b5585f-9hhjr 1/1 Running 0 18m
kubernetes-dashboard-69cb58d748-dfpmm 1/1 Running 0 17m
nginx-proxy-n3.public.k8s.example.com 1/1 Running 0 19m
nginx-proxy-n4.public.k8s.example.com 1/1 Running 0 19m
Excellent! We have a basic cluster up and running.
Securing the cluster with a firewall
Now the cluster is set up, we recommend using a firewall to restrict access to the Kubernetes API to IP addresses that you trust. You can use any firewall you wish; for this example we are going to use UFW. The following commands should be run on each of your nodes.
First we need to install the firewall:
sudo apt install ufw
We are then going to allow SSH and Kubernetes API (kubectl) traffic from a trusted IP address. In our example we are using a local IP address; replace this with whatever IP address you wish to access the cluster from:
sudo ufw allow from 192.168.0.2/32 to any port 22
sudo ufw allow from 192.168.0.2/32 to any port 6443
We are now going to deny all other incoming traffic by default and allow all outgoing traffic:
sudo ufw default deny incoming
sudo ufw default allow outgoing
Finally turn on the firewall:
sudo ufw enable
# type "y" and press enter
sudo ufw status
If you are serving web traffic, for example with Apache or nginx, you will also need to allow inbound HTTP and HTTPS traffic:
sudo ufw allow 443/tcp
sudo ufw allow 80/tcp
We also want to allow traffic on the Flannel overlay network and the service network (defined in the inventory/civo/group_vars/k8s-cluster.yml file in the Kubespray repository under kube_pods_subnet and kube_service_addresses):
sudo ufw allow from 10.0.0.0/8
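If you have changed either of those subnets from the Kubespray defaults, check the values your inventory actually uses before opening the range; running something like the following on your local machine against the Kubespray inventory will show them:
grep -E 'kube_pods_subnet|kube_service_addresses' ~/repos/kubespray/inventory/civo/group_vars/k8s-cluster.yml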
Fixing the Civo/Docker networking
As described in a previous learn guide, there is a problem with Docker where it doesn't recognise that it is running on a NIC with a non-standard MTU, so we need to take some manual steps to fix the issue. SSH on to each instance and do the following (a scripted alternative is sketched after the list):
- Edit the /lib/systemd/system/docker.service file, find the line containing ExecStart=/usr/bin/dockerd -H fd:// and insert --mtu 1450 before the -H.
- Run the command sudo systemctl daemon-reload
- Run the command sudo service docker restart
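If you would rather script the edit than make it by hand, a sketch along these lines should work on each node. It assumes the unit file contains the stock ExecStart=/usr/bin/dockerd -H fd:// line shipped with the Docker package; check the file first if you have customised it:
# Add --mtu 1450 to the dockerd command line, then reload systemd and restart Docker
sudo sed -i 's|ExecStart=/usr/bin/dockerd -H fd://|ExecStart=/usr/bin/dockerd --mtu 1450 -H fd://|' /lib/systemd/system/docker.service
sudo systemctl daemon-reload
sudo service docker restart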
We are now all done and have a fully working four-node Kubernetes cluster.