Kubestr is a collection of tools to discover, validate, and evaluate your Kubernetes storage options.
Kubestr was introduced recently to solve the problem of choosing the right storage solution for your Kubernetes cluster. Running workloads that need persistent storage is nothing new, and its use is ever increasing.
A range of CSI-based persistent storage solutions has grown over time, with storage vendors creating CSI drivers that are used by the community.
Below is an image from the CNCF Landscape showing the number of cloud-native storage options available at the time of writing.
Kubestr aims to solve the following challenges:

- Identify the various storage options present in a cluster.
- Validate if the storage options are configured correctly.
- Evaluate the storage using common benchmarking tools like FIO.
With so many vendors to pick from, choosing a storage option for your cluster can be difficult, especially when you also want to benchmark the candidates to see whether they actually suit your use case. It is equally important to check that the chosen storage solution is configured correctly, which is where Kubestr can help. In this post we will discover what Kubestr brings to the table and explore the Longhorn storage solution.
Step 1: Create Civo Kubernetes cluster with Longhorn installed
Naturally, we'll use Civo Kubernetes, which is based on K3s, to experiment with this quickly. If you don’t yet have an account, sign up here. You could also use any other Kubernetes cluster you have access to.
Create a new cluster from the UI (you can also use Civo CLI), then install Longhorn from the Marketplace.
Once ready, you should see the cluster with its nodes in a Ready state.
Make sure you have kubectl installed and the kubeconfig file for your cluster downloaded.
```
kubectl get nodes
NAME                                        STATUS   ROLES                  AGE     VERSION
k3s-sotrage-longhorn-819c2694-master-4135   Ready    control-plane,master   6h30m   v1.20.2+k3s1
k3s-sotrage-longhorn-819c2694-node-ad87     Ready    <none>                 6h30m   v1.20.2+k3s1
k3s-sotrage-longhorn-819c2694-node-ee0f     Ready    <none>                 6h30m   v1.20.2+k3s1
```
Alternatively you can install Longhorn via a kubectl command as well:
```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.0.0/deploy/longhorn.yaml
```
Step 2: Install Kubestr locally
Here we will install Kubestr on a Mac by running:

```
curl -LO https://github.com/kastenhq/kubestr/releases/download/v0.4.13/kubestr-v0.4.13-darwin-amd64.tar.gz
tar -xvf kubestr-v0.4.13-darwin-amd64.tar.gz
x LICENSE
x README.md
x kubestr
```
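If you are not on macOS, a small `uname`-based switch can pick the matching release asset. The asset naming scheme (darwin/linux/windows, amd64) is an assumption based on the project's releases page, so adjust it if your platform's asset differs:

```shell
# Pick the kubestr v0.4.13 release tarball matching the local OS.
# Asset naming (darwin/linux/windows, amd64) is assumed from the releases page.
case "$(uname -s)" in
  Darwin) os=darwin ;;
  Linux)  os=linux ;;
  *)      os=windows ;;
esac
url="https://github.com/kastenhq/kubestr/releases/download/v0.4.13/kubestr-v0.4.13-${os}-amd64.tar.gz"
echo "$url"
# Then download and unpack it:
#   curl -LO "$url" && tar -xvf "kubestr-v0.4.13-${os}-amd64.tar.gz"
```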
Step 3: Experimenting with Kubernetes Storage using Kubestr
Check that Kubestr is working:
```
$ kubestr -h
kubestr is a tool that will scan your k8s cluster and validate that the storage systems in place as well as run performance tests.

Usage:
  kubestr [flags]
  kubestr [command]

Available Commands:
  csicheck    Runs the CSI snapshot restore check
  fio         Runs an fio test
  help        Help about any command

Flags:
  -h, --help            help for kubestr
  -o, --output string   Options(json)
```
Storage solutions enabled on the cluster
Let's check the storage options enabled on the cluster by running the kubestr binary without any subcommand. In our case we have the local-path provisioner that comes pre-installed with K3s, and the other one is Longhorn, which we installed.
Checking the Snapshot backup/restore functionality
This is a really handy feature that helps you find out whether backup/restore will work for your persistent volumes.
To enable this you need to deploy the snapshot CRDs and the snapshot controller, and set up a backup target in Longhorn.
Install the Snapshot Beta CRDs
Download the files from https://github.com/kubernetes-csi/external-snapshotter/tree/master/client/config/crd
Then run:

```
kubectl create -f client/config/crd
```

The CRDs should get applied to your cluster, which you can verify by running:
```
kubectl get crd | grep snapshot
volumesnapshotclasses.snapshot.storage.k8s.io     2021-03-31T14:12:57Z
volumesnapshotcontents.snapshot.storage.k8s.io    2021-03-31T14:13:22Z
volumesnapshots.snapshot.storage.k8s.io           2021-03-31T14:13:48Z
```
Install the Common Snapshot Controller
Download the files from https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/snapshot-controller
Update the namespace to an appropriate value for your environment (e.g. kube-system), then apply the manifests:
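One way to do the namespace update in bulk is a `sed` pass over the downloaded manifests. The sample file below only stands in for the real manifests so the command can be tried locally; point the loop at the actual files from the external-snapshotter repo:

```shell
# Create a stand-in manifest (illustration only; use the real files from
# deploy/kubernetes/snapshot-controller in the external-snapshotter repo).
mkdir -p deploy/kubernetes/snapshot-controller
cat > deploy/kubernetes/snapshot-controller/rbac.yaml <<'EOF'
metadata:
  name: snapshot-controller
  namespace: default
EOF

# Rewrite the namespace in every manifest before applying.
for f in deploy/kubernetes/snapshot-controller/*.yaml; do
  sed -i.bak 's/namespace: default/namespace: kube-system/' "$f"
done
grep 'namespace:' deploy/kubernetes/snapshot-controller/rbac.yaml
```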
```
kubectl create -f deploy/kubernetes/snapshot-controller
```
```
kubectl get pods -n kube-system | grep snapshot
snapshot-controller-9f68fdd9-ghpk7   1/1     Running   0          6m14s
snapshot-controller-9f68fdd9-9rrww   1/1     Running   0          6m14s
```
Now create the CR by applying a YAML file with the below details:

```
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
metadata:
  name: longhorn
driver: driver.longhorn.io
deletionPolicy: Delete
```
Now when you run kubestr, you can see the snapshot class appear in its output.
Next, set the backup target in Longhorn. We will install Minio into the cluster and use that as the target.
```
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml
secret/minio-secret created
secret/minio-secret created
pod/longhorn-test-minio created
service/minio-service created

kubectl get secrets | grep minio
NAME           TYPE     DATA   AGE
minio-secret   Opaque   5      4m6s
```
Open the Longhorn frontend and configure the backup target as below in the settings page.
Let's check if snapshot and restore are working with Longhorn:
```
./kubestr csicheck -s longhorn -v longhorn
```
The above command creates a small application with a PVC, takes a snapshot of the volume, restores it into a cloned pod and PVC, and then cleans up the resources.
Output from the command will look something like:
```
Creating application
  -> Created pod (kubestr-csi-original-pod6qkj8) and pvc (kubestr-csi-original-pvcphtb2)
Taking a snapshot
  -> Created snapshot (kubestr-snapshot-20210402193113)
Restoring application
  -> Restored pod (kubestr-csi-cloned-podckmtt) and pvc (kubestr-csi-cloned-pvcnstz7)
Cleaning up resources
CSI checker test: CSI application successfully snapshotted and restored. - OK
```
This helps verify that the storage solution is implemented correctly and that snapshot/backup and restore will work.
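For reference, the snapshot step of that check uses the standard CSI snapshot API. Taking a snapshot of a PVC yourself looks roughly like the manifest below, where `data-pvc` and `data-pvc-snap` are hypothetical names for illustration; apply the generated file with `kubectl create -f` on a real cluster:

```shell
# Write a VolumeSnapshot manifest referencing the "longhorn" snapshot class.
# "data-pvc" and "data-pvc-snap" are placeholder names for illustration.
cat > volumesnapshot.yaml <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: data-pvc-snap
spec:
  volumeSnapshotClassName: longhorn
  source:
    persistentVolumeClaimName: data-pvc
EOF
cat volumesnapshot.yaml
# On a real cluster: kubectl create -f volumesnapshot.yaml
```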
Storage Performance evaluation
This Kubestr feature lets you test the performance of the storage using the standard FIO tool; you can also supply your own FIO configs. Let's test it against our cluster.
```
./kubestr fio --help
Run an fio test

Usage:
  kubestr fio [flags]

Flags:
  -f, --fiofile string        The path to a an fio config file.
  -h, --help                  help for fio
  -i, --image string          The container image used to create a pod.
  -n, --namespace string      The namespace used to run FIO. (default "default")
  -z, --size string           The size of the volume used to run FIO. (default "100Gi")
  -s, --storageclass string   The name of a Storageclass. (Required)
  -t, --testname string       The Name of a predefined kubestr fio test. Options(default-fio)

Global Flags:
  -o, --output string   Options(json)
```
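To drive the test with your own workload profile, write a small FIO job file and pass it with `-f`. The job parameters below are illustrative, modeled loosely on the global options shown in the default test results later in this post:

```shell
# Write a custom FIO job file (values are illustrative, not a recommendation).
cat > custom-fio.fio <<'EOF'
[global]
ioengine=libaio
direct=1
verify=0
gtod_reduce=1

[randwrite-4k]
rw=randwrite
bs=4k
filesize=1G
iodepth=32
EOF
cat custom-fio.fio
# Run it against the longhorn StorageClass:
#   ./kubestr fio -s longhorn -z 10G -f custom-fio.fio
```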
Let's run the test with a 10G volume size:
```
./kubestr fio -s longhorn -z 10G
```
Output of the command
```
PVC created kubestr-fio-pvc-rs8vw
Pod created kubestr-fio-pod-9llqr
Running FIO test (default-fio) on StorageClass (longhorn) with a PVC of Size (10G)
Elapsed time- 3m32.957323713s
FIO test results:

FIO version - fio-3.20
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=17.528746 BW(KiB/s)=79
  iops: min=4 max=189 avg=39.736843
  bw(KiB/s): min=16 max=759 avg=159.894730

JobName: write_iops
  blocksize=4K filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=9.270335 BW(KiB/s)=46
  iops: min=2 max=103 avg=41.090908
  bw(KiB/s): min=8 max=415 avg=164.909088

JobName: read_bw
  blocksize=128K filesize=2G iodepth=64 rw=randread
read:
  IOPS=22.611294 BW(KiB/s)=3228
  iops: min=3 max=238 avg=50.349998
  bw(KiB/s): min=510 max=30464 avg=6484.799805

JobName: write_bw
  blocksize=128k filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=11.329676 BW(KiB/s)=1809
  iops: min=1 max=198 avg=50.400002
  bw(KiB/s): min=255 max=25344 avg=6501.600098

Disk stats (read/write):
  sda: ios=1206/608 merge=6/46 ticks=3526454/3399757 in_queue=6922544, util=97.771729%
  - OK
```
Kubestr is a simple, lightweight tool to evaluate the storage options within your cluster. You can run it across multiple clusters by switching the kubeconfig, and compare performance across clusters, clouds, and storage options.
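As a sketch of that workflow, you could generate a small driver script that runs the same FIO test against each cluster and saves the JSON output for comparison. The kubeconfig file names below are placeholders:

```shell
# Generate a driver script; run it where ./kubestr and both kubeconfigs exist.
# "cluster1.yaml" and "cluster2.yaml" are placeholder kubeconfig paths.
cat > compare-storage.sh <<'EOF'
#!/bin/sh
for cfg in cluster1.yaml cluster2.yaml; do
  KUBECONFIG="$cfg" ./kubestr fio -s longhorn -z 10G -o json > "results-${cfg%.yaml}.json"
done
EOF
chmod +x compare-storage.sh
cat compare-storage.sh
```

Each run produces a `results-<cluster>.json` file that you can diff or feed into your own tooling.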