Introduction

Containerisation of applications, and Kubernetes to manage them, have been a revolutionary force in software. The power that this shift in software development and delivery gives us is the chief reason so many teams are moving to Kubernetes.

In more traditional cloud computing, applications ran inside virtual machines. The containerisation era has driven the development of microservices that run as containers: self-contained units that package everything they need to run. But there are scenarios where certain components of your application still need to run as full-fledged virtual machines, or where an application cannot be containerised at all because of its complexity or third-party integrations, yet you still want to benefit from the power and features of Kubernetes. This is where KubeVirt comes in!

KubeVirt is a Kubernetes extension that allows running traditional Virtual Machine (VM) workloads natively side by side with container workloads.

Why KubeVirt?

Diagram showing duplicated logging, metrics, monitoring, scheduling and networking configurations between a virtual machine workload and a containerised workload

When you manage two separate infrastructures for your containerised and VM workloads, you maintain separate layers of logging, monitoring, metrics, scheduling capabilities and networking, as shown above.

Maintaining a single strand of configuration by scheduling VM workloads within Kubernetes using KubeVirt

With KubeVirt you get to leverage the power of Kubernetes to run VM workloads alongside container workloads and get the same benefits including:

  • Declarative  -  Create a VM declaratively by creating a custom resource of kind VirtualMachine. This gives you an experience similar to creating a pod, deployment or any other Kubernetes object.
  • Kubernetes power  -  You get the benefits of Kubernetes: the scheduling capabilities, resource requests and limits, and the same networking as you have for pods.
  • Storage  -  You can declare and use PersistentVolumeClaims (PVC) as disks inside the VM.
  • Observability  - You can use the same observability tools for your VM workloads as you have for your containerised workloads. You can have the same systems for logging, allowing you to create an integrated dashboard and alerting.
  • Same infrastructure  -  Since the VM and the container workloads will be running side by side, you need not maintain two separate infrastructures.
  • Pipelines  -  You can leverage tools like Tekton to build pipelines around VMs, for example a task that installs an operating system (say, Windows) and then uploads the resulting artifacts to a PVC. In this way you can compose different VM-related tasks and make the most of the power of Kubernetes.
  • Traditional VM use cases  -  You can leverage Kubernetes for all your traditional VM use cases like Virtual Desktops (VDI), infrastructure-as-a-service, etc.
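To make the declarative point above concrete, here is a minimal sketch of what a VirtualMachine custom resource looks like (the name and memory value are illustrative; a complete, runnable manifest appears later in this article):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm          # illustrative name
spec:
  running: false       # start/stop the VM declaratively
  template:
    spec:
      domain:
        devices: {}    # disks, interfaces, etc. are declared here
        resources:
          requests:
            memory: 64M
```

You manage it with the same verbs as any other Kubernetes object, e.g. `kubectl apply -f vm.yaml` and `kubectl get vm`.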

Additional Use-cases

  • VMs and containers running side by side.
  • Kubernetes on Kubernetes  -  at Civo we leverage KubeVirt on our supercluster to power the Civo managed Kubernetes offering.
  • Harvester  -  An open-source hyperconverged infrastructure (HCI) solution powered by KubeVirt.

KubeVirt basic architecture

Simplified architecture diagram showing an application pod coexisting alongside a KubeVirt pod running a KVM + QEMU container

The architecture looks pretty simple: KubeVirt is installed on top of Kubernetes, and each VM runs as a KVM + QEMU process inside a pod. KubeVirt is container runtime interface (CRI) independent, meaning it will work with Docker, CRI-O, containerd, etc. Let us try to understand the KubeVirt components with the help of the VM launch flow.

VM launch flow

Representation of the process that happens on a node when a kubectl apply command is run to start a Virtual Machine with KubeVirt

The image above shows the scenario where a user creates a VirtualMachine object, which ensures that a VirtualMachineInstance (VMI) object is created inside the cluster. The VirtualMachine object provides the additional capability of starting and stopping the VMI, and of making sure that the VMI is running whenever it should be.

Users can also create the VMI object directly as a custom resource. The virt-controller, a cluster-level component, picks up the object and spins up a pod for the VMI to live in. After the pod gets scheduled onto a node, the kubelet starts it, and two components that live on the node take over:

  • virt-launcher, which is responsible for starting up the QEMU process, monitoring it, and a few other tasks such as live migration.

  • virt-handler, which is a privileged DaemonSet. It instructs virt-launcher to launch the VMI, monitors the VMI, and ensures the corresponding libvirt domain is booted or halted accordingly.

One more component, virt-api, provides an HTTP RESTful entrypoint to manage the virtual machines within the cluster.

That is a summary of how a VM launches via KubeVirt.
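If you take the direct VMI route mentioned above, the resource looks much like the template section of a VirtualMachine. A minimal illustrative sketch (the name and memory value are assumptions, not taken from this article):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: standalone-vmi   # illustrative name
spec:
  domain:
    devices: {}          # disks and interfaces would be declared here
    resources:
      requests:
        memory: 64M
```

The virt-controller picks this up and creates a virt-launcher pod for it; unlike a VirtualMachine, a bare VMI is not restarted for you once it stops.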

VM Live migration

In the following section, we will run a KubeVirt live migration demonstration on Civo Kubernetes. It assumes you have a Civo account and have both kubectl and the Civo CLI tool installed.

Civo Kubernetes cluster creation

The following command will create a 5-node cluster named kubevirt-demo.

$ civo k3s create kubevirt-demo --nodes 5 --size "g3.k3s.large"
The cluster kubevirt-demo (97a05a0b-b2d5-4368-8254-2a53a7ea96cb) has been created

We will need to get the Kubeconfig for the cluster and save it to our desired location. If you do not specify a path, it will save it to the default location of ~/.kube/config.

$ civo k3s config kubevirt-demo --save --local-path /Users/saiyampathak/civo/test/kubevirt.config
Access your cluster with:
KUBECONFIG=/Users/saiyampathak/civo/test/kubevirt.config kubectl get node

Let's make sure that kubectl knows to use our cluster's configuration file:

$ export KUBECONFIG=/Users/saiyampathak/civo/test/kubevirt.config
$ kubectl get nodes
NAME                                        STATUS   ROLES    AGE   VERSION
k3s-kubevirt-demo-828552e1-node-pool-e483   Ready    <none>   83s   v1.20.2+k3s1
k3s-kubevirt-demo-828552e1-node-pool-01f6   Ready    <none>   81s   v1.20.2+k3s1
k3s-kubevirt-demo-828552e1-node-pool-d783   Ready    <none>   80s   v1.20.2+k3s1
k3s-kubevirt-demo-828552e1-node-pool-198c   Ready    <none>   78s   v1.20.2+k3s1
k3s-kubevirt-demo-828552e1-node-pool-11ae   Ready    <none>   83s   v1.20.2+k3s1

KubeVirt installation

We will now run the requisite commands to install KubeVirt on our cluster, using the latest stable version:

$ export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)

# Check that the $VERSION variable shows the current KubeVirt version number
$ echo $VERSION

$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml

$ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
namespace/kubevirt created
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
priorityclass.scheduling.k8s.io/kubevirt-cluster-critical created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator created
serviceaccount/kubevirt-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator-rolebinding created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
deployment.apps/virt-operator created
kubevirt.kubevirt.io/kubevirt created

Confirm KubeVirt is running on the cluster successfully:

$ kubectl get pods -n kubevirt
NAME                               READY   STATUS    RESTARTS   AGE
virt-operator-844b6fc49b-bjmg4     1/1     Running   0          4m8s
virt-operator-844b6fc49b-hs9h8     1/1     Running   0          4m8s
virt-api-647cbf9699-dbzkr          1/1     Running   0          3m19s
virt-api-647cbf9699-qgmcw          1/1     Running   0          3m19s
virt-controller-68fb46d4cd-wd6l4   1/1     Running   0          2m52s
virt-controller-68fb46d4cd-chjvz   1/1     Running   0          2m52s
virt-handler-fch8j                 1/1     Running   0          2m52s
virt-handler-lnpdk                 1/1     Running   0          2m52s
virt-handler-gjpph                 1/1     Running   0          2m52s
virt-handler-szwnx                 1/1     Running   0          2m52s
virt-handler-fcrzt                 1/1     Running   0          2m52s

Enable feature gate

Let's create and apply the KubeVirt ConfigMap, enabling the LiveMigration feature gate, in one command:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
  namespace: kubevirt
  labels:
    kubevirt.io: ""
data:
  feature-gates: "LiveMigration"
EOF
configmap/kubevirt-config created
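The kubevirt-config ConfigMap is the mechanism used by the KubeVirt version in this demo. On newer KubeVirt releases the ConfigMap approach is deprecated, and feature gates are set on the KubeVirt custom resource instead; a sketch of that form (check the docs for your version before relying on it):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - LiveMigration
```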

VM creation

We will now start our virtual machine on the cluster! The following commands will, in turn, create the virtual machine, start the instance, show its state, and then expose SSH and HTTP endpoints as NodePort services.

$ kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml
virtualmachine.kubevirt.io/testvm created

$ virtctl start testvm
VM testvm was scheduled to start

$ kubectl get vmi
NAME     AGE   PHASE     IP          NODENAME                                    READY
testvm   28s   Running   10.42.2.6   k3s-kubevirt-demo-828552e1-node-pool-01f6   True

$ kubectl get vm
NAME     AGE   STATUS    READY
testvm   35s   Running   True

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
virt-launcher-testvm-x7h8k   2/2     Running   0          5m30s

$ virtctl expose vmi testvm --name=testvm-ssh --port=22 --type=NodePort
Service testvm-ssh successfully exposed for vmi testvm

$ virtctl expose vmi testvm --name=testvm-http --port=8080 --type=NodePort
Service testvm-http successfully exposed for vmi testvm

$ virtctl console testvm
# inside the VM console, start a simple HTTP responder on port 8080:
$ while true; do ( echo "HTTP/1.0 200 Ok"; echo; echo "Migration test" ) | nc -l -p 8080; done

Here is the YAML that was applied above as the vm.yaml file, in case you're interested:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

You can see from the status output commands above that the VMI got scheduled on node k3s-kubevirt-demo-828552e1-node-pool-01f6. Let's try to hit the HTTP endpoint we exposed:

$ export PORT=31630 # the NodePort assigned to the testvm-http service; find yours with kubectl get svc
$ export IP=74.220.21.185 # replace with the public IP of your cluster
$ curl ${IP}:${PORT}
Migration test

$ virtctl migrate testvm
VM testvm was scheduled to migrate
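virtctl migrate is a shortcut for creating a VirtualMachineInstanceMigration object. If you prefer the declarative route, you could apply an equivalent manifest yourself; a sketch (the object name here is illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-testvm   # illustrative name
spec:
  vmiName: testvm
```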

You can check the spec section of the resulting VirtualMachineInstanceMigration object, where the virtual machine instance is specified:

spec:
 vmiName: testvm

As the migration was scheduled, we can check whether it has worked:

$ kubectl get vmi
NAME     AGE   PHASE     IP          NODENAME                                    READY
testvm   11m   Running   10.42.1.8   k3s-kubevirt-demo-828552e1-node-pool-e483   True

You can see that in this case, the node has changed to k3s-kubevirt-demo-828552e1-node-pool-e483.

Wrapping up

KubeVirt lets you run VM and container workloads side by side and apply the same power of Kubernetes, with the same tooling, to both. The VM migration example above was intended to show how a VM gets spun up, and how the flow actually works on Civo Kubernetes. Effectively, migrating an entire virtual machine from one node to another is as smooth as scheduling a pod on a node.

Let us know on Twitter @Civocloud and @SaiyamPathak if you give KubeVirt a go on Civo Kubernetes! What use cases can you imagine for running VMs and containers alongside each other in the same cluster?