Harvester is open-source hyperconverged infrastructure (HCI) software that enables the creation and management of virtual machines (VMs) using the power of Kubernetes. It is based on KubeVirt, which provides the virtualization layer for the VMs, and Longhorn, which provides persistent storage for the machines. One of the goals of Harvester is to make it easy for users to run and manage VMs without knowledge of the tech stack (Kubernetes, KubeVirt, etc.) behind it.
In this tutorial, we will try to understand:
- Harvester Architecture
- What does Harvester offer?
- How to install Harvester on Civo Kubernetes
- Launching a VM using Harvester
- Install K3s on a newly launched VM created via Harvester to experience "Kube-ception"
For more context, listen to this stream by Sheng from SUSE to learn more about Harvester and its bare metal setup.
The major components of Harvester are KubeVirt, Longhorn, and Multus, which ties together VLANs (Virtual Local Area Networks) and the management network.
As you can see, this is a hyperconverged infrastructure architecture, where you have:
- Bare-metal nodes
- K3OS running on top of the nodes (note that this might change when Harvester goes into General Availability)
- Longhorn and Kubevirt
- VM creation
- VMs are attached to a VLAN and management network, or just the management network
Here are some notable features of Harvester (from the docs):
- VM lifecycle management, including SSH key injection, cloud-init, and graphical and serial port consoles
- VM live migration support
- VM backup and restore support
- Distributed block storage
- Multiple NICs in the VM connecting to the management network or VLANs
- Virtual Machine and cloud-init templates
- Built-in Rancher integration and the Harvester node driver
- PXE/iPXE boot support
- Raw block device support
With the release of version 0.2.0, Harvester has already shown impressive improvements, like the removal of the Minio dependency, as Longhorn now manages VM images using its Backing Image feature. Importing an image now takes less time, with more of the time spent on the first VM boot from that image as it gets stored in Longhorn.
VM live migration and backup support
VM live migration and backup support are also key features in v0.2.0. You can now migrate a VM from one node to another if needed.
You can also take backups of virtual machines to a target outside of the cluster. To use this, you need to add a backup target, either an S3-compatible endpoint or an NFS server.
Rancher integration and PXE/iPXE boot support
With Harvester v0.2.0 you get PXE boot support to provision bare metal nodes with Operating Systems in an automated way, and Rancher integration to create Kubernetes clusters on top of your bare metal nodes.
Harvester can be installed in two modes: ISO mode and App mode.
- ISO mode can be used to install Harvester directly on bare metal to create a Harvester cluster, resulting in the layered architecture described above.
- App mode is used to install Harvester as a Helm chart onto an existing Kubernetes cluster
Below we will go through the App mode installation method, as we will be installing Harvester onto a Civo Kubernetes cluster.
Create Civo Kubernetes cluster
Create a new cluster from the Kubernetes menu on Civo (you can also use the Civo CLI). Once ready, you should see the cluster with its nodes reporting Ready.
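If you prefer the CLI route, something like the following should work. Note that the cluster name, node count, and size name here are examples, not the exact values used in this tutorial; run `civo size list` to see the sizes available to your account:

```shell
# Create a Kubernetes cluster on Civo (name and size are examples)
civo kubernetes create k3s-harvester --nodes 6 --size g3.k3s.xlarge --wait

# Download the cluster's kubeconfig and save it locally
civo kubernetes config k3s-harvester --save
```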
I created a cluster with Extra Large nodes:
Make sure you have both Helm and kubectl installed on your machine, and the KUBECONFIG file for your cluster downloaded, so that you can run `kubectl get nodes` and see details of the cluster you just created:
```
$ kubectl get nodes
NAME                                    STATUS   ROLES    AGE     VERSION
k3s-harvester-a2481fd0-node-pool-8553   Ready    <none>   2m34s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-6a18   Ready    <none>   2m34s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-6cd6   Ready    <none>   2m34s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-e8be   Ready    <none>   2m33s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-65d3   Ready    <none>   2m33s   v1.20.2+k3s1
k3s-harvester-a2481fd0-node-pool-8b06   Ready    <none>   2m33s   v1.20.2+k3s1
```
Install Harvester via Helm chart
Create a namespace for Harvester:
```
$ kubectl create ns harvester-system
namespace/harvester-system created
```
```
$ helm install harvester harvester --namespace harvester-system --set longhorn.enabled=true,minio.persistence.storageClass=longhorn,service.harvester.type=NodePort
NAME: harvester
LAST DEPLOYED: Tue May  4 21:27:14 2021
NAMESPACE: harvester-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Harvester has been installed into "harvester-system" namespace with "harvester" as the release name.

- [x] KubeVirt Operator
- [x] KubeVirt Resource named "kubevirt"
- [x] KubeVirt Containerized Data Importer Operator
- [x] KubeVirt Containerized Data Importer(CDI) Resource named "cdi"
- [x] Minio
- [x] Longhorn
- [ ] Multus-cni

Please make sure there is a default StorageClass in the Kubernetes cluster.

To learn more about the release, try:

  $ helm status harvester
  $ helm get all harvester
```
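As the chart notes mention, the cluster needs a default StorageClass. You can check which one is marked as the default, and optionally promote Longhorn to the default, using standard Kubernetes commands (the annotation below is the standard default-class annotation, but whether you want Longhorn as the default is your call):

```shell
# Show all StorageClasses; the default is marked "(default)"
kubectl get storageclass

# Optionally mark longhorn as the default StorageClass
kubectl patch storageclass longhorn \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```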
```
$ helm history harvester -n harvester-system
REVISION  UPDATED                   STATUS    CHART            APP VERSION  DESCRIPTION
1         Fri May  7 11:17:09 2021  deployed  harvester-0.2.0  v0.2.0       Install complete
```
Within the harvester-system namespace, a number of operators will be deployed:
```
$ kubectl get deploy -n harvester-system
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
harvester-network-controller-manager   2/2     2            2           16d
harvester                              3/3     3            3           16d
virt-operator                          1/1     1            1           16d
cdi-apiserver                          1/1     1            1           16d
cdi-uploadproxy                        1/1     1            1           16d
virt-api                               2/2     2            2           16d
cdi-operator                           1/1     1            1           16d
cdi-deployment                         1/1     1            1           16d
virt-controller                        2/2     2            2           16d
```
You can access the Harvester UI via the NodePort, or you can create an Ingress object for it.
```
$ kubectl get svc -n harvester-system
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
harvester-minio               ClusterIP   10.43.254.67    <none>        9000/TCP         16d
harvester                     NodePort    10.43.18.240    <none>        8443:31594/TCP   16d
cdi-api                       ClusterIP   10.43.81.71     <none>        443/TCP          16d
cdi-prometheus-metrics        ClusterIP   10.43.44.66     <none>        443/TCP          16d
cdi-uploadproxy               ClusterIP   10.43.63.49     <none>        443/TCP          16d
kubevirt-prometheus-metrics   ClusterIP   10.43.135.213   <none>        443/TCP          16d
virt-api                      ClusterIP   10.43.206.174   <none>        443/TCP          16d
kubevirt-operator-webhook     ClusterIP   10.43.18.154    <none>        443/TCP          16d
```
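If you would rather not use the NodePort, a minimal Ingress pointing at the harvester service might look like the sketch below. The hostname is a placeholder, and because the UI is served over HTTPS on port 8443, your ingress controller may additionally need a backend-protocol annotation (controller-specific, not shown here):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: harvester
  namespace: harvester-system
spec:
  rules:
    - host: harvester.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: harvester    # the NodePort service shown above
                port:
                  number: 8443
```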
Launching a VM from Harvester UI
First, log in to the Harvester UI with the default credentials.
You should see Harvester 0.2.0 with default view:
We will need to add an image from which the VMs can be created.
Let's add an Ubuntu image using https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img in the "Image: Create" menu.
Now this image can be used to create the Virtual Machine!
Create the virtual machine from the Harvester user interface under "Virtual Machine: Create"
After selecting the CPU, memory, and image, put the cloud config into the Advanced Options screen:
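For reference, a minimal cloud config that enables password login for the image's default ubuntu user (matching the credentials used later in this tutorial) could look like the following. This is a sketch using standard cloud-init keys, not necessarily the exact config used here:

```yaml
#cloud-config
# Set the default user's password (the Ubuntu cloud image's default user is "ubuntu")
password: password
chpasswd:
  expire: false
# Allow password-based SSH logins (for demo purposes only)
ssh_pwauth: true
```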
Once you click Create, the VM will be created successfully:
Now that the VM is created, you can log in to its console directly from the UI. Once you select the console option for the VM, you will be prompted for a login and password; use ubuntu/password (which we set in the cloud config above).
Install K3s on the newly created VM using the K3s installation script:

```
curl -sfL https://get.k3s.io | sh -
```
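Once the script finishes, you can confirm from inside the VM that the single-node K3s cluster is up (K3s bundles its own kubectl):

```shell
# The VM's node should report Ready after a short while
sudo k3s kubectl get nodes
```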
You can see the beautiful inception of Kubernetes within Kubernetes within Kubernetes:
Harvester looks to be a promising piece of open-source hyperconverged infrastructure (HCI) software, given the combination of technologies it uses and the number of features already built. In my opinion, it will be widely adopted and used by the community.
Let us know on Twitter @Civocloud and @SaiyamPathak if you try Harvester out on Civo Kubernetes or even bare metal servers.