Kube-ception: Kubernetes within Kubernetes within Kubernetes using Harvester
Harvester is an open source hyper-converged infrastructure (HCI) solution that enables you to create and manage virtual machines using the power of Kubernetes.
Harvester is an open source hyper-converged infrastructure (HCI) solution that enables the creation and management of virtual machines (VMs) using the power of Kubernetes. It is based on KubeVirt, which provides the virtualization layer for the VMs, and Longhorn, which provides persistent storage for the machines. One of the goals of Harvester is to make it easy for the user to run and manage VMs without knowledge of the tech stack (Kubernetes, KubeVirt, etc.) behind it.
In this tutorial, we will cover:
- The Harvester architecture
- What Harvester offers
- How to install Harvester on Civo Kubernetes
- Launching a VM using Harvester
- Installing K3s on a newly launched VM created via Harvester to experience "Kube-ception"
For more context, listen to this stream by Sheng from SUSE to learn more about Harvester and its bare-metal setup.

Harvester architecture
The major components of Harvester are KubeVirt, Longhorn, and Multus, which ties together VLANs (Virtual Local Area Networks) and the management network.

As you can see, this is the hyper-converged infrastructure mode architecture, which consists of:
- Bare-metal nodes
- K3OS running on top of the nodes (note that this might change when Harvester reaches General Availability)
- Longhorn and KubeVirt
- VM creation
- VMs attached to both a VLAN and the management network, or just the management network
Harvester features and use-cases

Here are some notable features of Harvester (from the docs):
- VM lifecycle management, including SSH key injection, cloud-init, and graphic and serial port consoles
- VM live migration support
- VM backup and restore support
- Distributed block storage
- Multiple NICs in the VM connecting to the management network or VLANs
- Virtual Machine and cloud-init templates
- Built-in Rancher integration and the Harvester node driver
- PXE/iPXE boot support
- Raw block device support
With the release of version 0.2.0, Harvester has already shown impressive improvements, such as the removal of the MinIO dependency: Longhorn now manages VM images using its Backing Image feature. As a result, importing an image takes less time, while the first VM boot from that image takes longer, since that is when the image gets stored in Longhorn.
VM live migration and backup support
VM Live Migration and Backup support are also key features in v0.2.0. You can now migrate a VM from one node to another if needed.

You can also take backups of your virtual machines to a target outside of the cluster. In order to use this, you need to add a backup target, either an S3-compatible endpoint or an NFS server.
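For illustration, the values you enter in the backup target settings look roughly like the sketch below. The hostnames are placeholders, and the exact field names are assumptions that may vary by Harvester version:

```yaml
# Hypothetical backup target values (hostnames are placeholders):
type: s3
endpoint: https://s3.example.com
bucketName: harvester-backups
bucketRegion: us-east-1
# or, for an NFS server:
# type: nfs
# endpoint: nfs://nfs.example.com:/harvester/backups
```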

Rancher integration and PXE/iPXE boot support
With Harvester v0.2.0 you get PXE boot support to provision bare-metal nodes with operating systems in an automated way, and Rancher integration to create Kubernetes clusters on top of your bare-metal nodes. A rough sketch of what an iPXE script involves is shown below.
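This minimal iPXE sketch acquires a DHCP lease, fetches a kernel and initrd over HTTP, and boots them; the server URL and file paths are placeholders for illustration, not the exact artifacts Harvester serves:

```
#!ipxe
# Minimal iPXE sketch: get a DHCP lease, fetch a kernel and initrd
# over HTTP, then boot. URLs below are placeholders.
dhcp
kernel http://pxe-server.example.com/harvester/vmlinuz
initrd http://pxe-server.example.com/harvester/initrd
boot
```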
Harvester installation
Harvester can be installed in two modes: ISO mode and App mode.
- ISO mode (also called HCI mode) can be used to install Harvester directly on bare metal to create a Harvester cluster. It results in the layered architecture shown below:

- App mode is used to install Harvester as a Helm chart onto an existing Kubernetes cluster
Below, we will go through the App mode installation method, as we will be installing Harvester onto a Civo Kubernetes cluster.
Creating a Civo Kubernetes cluster
Create a new cluster from the Kubernetes menu on Civo (you can also use the Civo CLI, as sketched below). Once ready, you should see the cluster with ready nodes.
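If you prefer the CLI, a command along these lines creates a similar cluster; the node size slug here is an assumption, so replace it with a size available to your account:

```
# Hypothetical Civo CLI invocation; the --size slug is an assumption
# and should be replaced with a size listed for your account.
civo kubernetes create harvester --nodes 6 --size g3.k3s.xlarge --wait
```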
I created a cluster of Extra Large nodes:

Make sure you have both Helm and kubectl installed on your machine, and the KUBECONFIG file for your cluster downloaded, so that you can run kubectl get nodes and see details of the cluster you just created:
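For example, assuming you saved the downloaded kubeconfig to ~/Downloads (the filename below is just an example):

```
# Point kubectl at the downloaded kubeconfig (path is an example)
export KUBECONFIG=~/Downloads/civo-k3s-harvester-kubeconfig
```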
```
$ kubectl get nodes
NAME                                    STATUS   ROLES    AGE     VERSION
k3s-harvester-a2481fd0-node-pool-8553   Ready    <none>   2m34s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-6a18   Ready    <none>   2m34s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-6cd6   Ready    <none>   2m34s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-e8be   Ready    <none>   2m33s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-65d3   Ready    <none>   2m33s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-8b06   Ready    <none>   2m33s   v1.20.2+k3s1
```
Install Harvester via Helm chart
Create a namespace for Harvester:
```
$ kubectl create ns harvester-system
namespace/harvester-system created
```
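The helm install command below assumes the Harvester chart is available locally. One way to get it, assuming the repository layout at the time of writing, is to clone the Harvester repository and run the install from its charts directory:

```
# Assumed repository layout: the chart lives under deploy/charts
git clone https://github.com/harvester/harvester.git
cd harvester/deploy/charts
```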
Install Harvester:
```
$ helm install harvester harvester --namespace harvester-system --set longhorn.enabled=true,minio.persistence.storageClass=longhorn,service.harvester.type=NodePort
NAME: harvester
LAST DEPLOYED: Tue May 4 21:27:14 2021
NAMESPACE: harvester-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Harvester has been installed into "harvester-system" namespace with "harvester" as the release name.

- [x] KubeVirt Operator
- [x] KubeVirt Resource named "kubevirt"
- [x] KubeVirt Containerized Data Importer Operator
- [x] KubeVirt Containerized Data Importer(CDI) Resource named "cdi"
- [x] Minio
- [x] Longhorn
- [ ] Multus-cni

Please make sure there is a default StorageClass in the Kubernetes cluster.

To learn more about the release, try:

$ helm status harvester
$ helm get all harvester
```
```
$ helm history harvester -n harvester-system
REVISION   UPDATED                   STATUS     CHART             APP VERSION   DESCRIPTION
1          Fri May 7 11:17:09 2021   deployed   harvester-0.2.0   v0.2.0        Install complete
```
In the harvester-system namespace, a bunch of operators will be deployed:
```
$ kubectl get deploy -n harvester-system
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
harvester-network-controller-manager   2/2     2            2           16d
harvester                              3/3     3            3           16d
virt-operator                          1/1     1            1           16d
cdi-apiserver                          1/1     1            1           16d
cdi-uploadproxy                        1/1     1            1           16d
virt-api                               2/2     2            2           16d
cdi-operator                           1/1     1            1           16d
cdi-deployment                         1/1     1            1           16d
virt-controller                        2/2     2            2           16d
```
You can access the Harvester UI via the NodePort, or you can create an Ingress object for it (a sketch follows the service listing below):
```
$ kubectl get svc -n harvester-system
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
harvester-minio               ClusterIP   10.43.254.67    <none>        9000/TCP         16d
harvester                     NodePort    10.43.18.240    <none>        8443:31594/TCP   16d
cdi-api                       ClusterIP   10.43.81.71     <none>        443/TCP          16d
cdi-prometheus-metrics        ClusterIP   10.43.44.66     <none>        443/TCP          16d
cdi-uploadproxy               ClusterIP   10.43.63.49     <none>        443/TCP          16d
kubevirt-prometheus-metrics   ClusterIP   10.43.135.213   <none>        443/TCP          16d
virt-api                      ClusterIP   10.43.206.174   <none>        443/TCP          16d
kubevirt-operator-webhook     ClusterIP   10.43.18.154    <none>        443/TCP          16d
```
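Since the harvester service serves HTTPS on port 8443, a minimal Ingress sketch could look like the following; the hostname is a placeholder, and the backend-protocol annotation is an assumption that depends on your ingress controller (k3s bundles Traefik):

```
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: harvester
  namespace: harvester-system
  annotations:
    # Assumption: Traefik 1.x annotation telling the controller that
    # the backend speaks HTTPS; other controllers differ.
    ingress.kubernetes.io/protocol: https
spec:
  rules:
    - host: harvester.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: harvester
                port:
                  number: 8443
EOF
```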
Launching a VM from Harvester UI
First, log in to the Harvester UI with the default credentials (admin/password).

You should see Harvester 0.2.0 with default view:

We will need to add an image from which the VMs can be created.
Let's add an Ubuntu image using https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img in the "Image: Create" menu.
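Under the hood, the UI creates a VirtualMachineImage resource, so a rough kubectl equivalent is sketched below; the apiVersion and spec fields are assumptions for Harvester v0.2.0 and may differ in other versions:

```
# Hypothetical equivalent of the "Image: Create" UI action; the
# apiVersion and spec fields are assumptions for Harvester v0.2.0.
kubectl apply -f - <<'EOF'
apiVersion: harvester.cattle.io/v1alpha1
kind: VirtualMachineImage
metadata:
  name: ubuntu-bionic
  namespace: harvester-system
spec:
  displayName: ubuntu-bionic
  url: https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
EOF
```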

Now this image can be used to create the Virtual Machine!
Create VM
Create the virtual machine from the Harvester user interface under "Virtual Machine: Create"

After selecting the CPU, memory, and image, enter the cloud config in the Advanced Options screen:
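For reference, a minimal cloud-config along these lines sets the password we log in with later; the exact block you paste may vary, but password, chpasswd, and ssh_pwauth are standard cloud-init keys:

```yaml
#cloud-config
# Sets the default user's password (user "ubuntu" on Ubuntu cloud
# images) so we can log in from the console later.
password: password
chpasswd:
  expire: false
ssh_pwauth: true
```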

Once you click Create, the VM will be created successfully:

Kube-ception mode
Now that the VM has been created successfully, you can log in to its console directly from the UI. Once you select the console option for the VM, you will be prompted for a login and password.
Use the credentials ubuntu/password to log in (which we set in the cloud config above):

Install k3s
Install k3s on the newly created VM using the curl command:
```
curl -sfL https://get.k3s.io | sh -
```
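Once the script completes, you can verify the single-node cluster from inside the VM using the kubectl that k3s bundles:

```
# k3s ships an embedded kubectl; this lists the VM as the node of
# the new single-node cluster
sudo k3s kubectl get nodes
```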
You can see the beautiful inception of Kubernetes within Kubernetes within Kubernetes:

Wrapping up
Harvester looks to be a promising open-source hyper-converged infrastructure (HCI) solution, given the combination of technologies it uses and the number of features already built in. In my opinion, it will be widely adopted and used by the community.
Let us know on Twitter @Civocloud and @SaiyamPathak if you try Harvester out on Civo Kubernetes or even bare metal servers.

Saiyam Pathak is Head of Developer Relations at vCluster and a prominent advocate in the cloud-native and Kubernetes community. He is also the founder of Kubesimplify, a platform dedicated to simplifying Kubernetes and cloud-native technologies through educational content.
Saiyam has previously worked at organizations including Civo, Walmart Labs, Oracle, and HP, gaining experience across machine learning platforms, multi-cloud infrastructure, and managed Kubernetes services. He actively contributes to the community through technical content, meetups, and open-source initiatives.