Kubestr is a collection of tools to discover, validate, and evaluate your Kubernetes storage options.

Kubestr was introduced recently to solve the problem of choosing the right storage solution for your Kubernetes cluster. Running workloads that need persistent storage is nothing new, and its use is ever increasing.

A range of persistent storage solutions based on the Container Storage Interface (CSI) has grown over time, with different storage vendors creating CSI drivers for the community to use.

Below is an image from the CNCF Landscape showing the number of cloud-native storage options available at the time of writing.

Cloud native storage

Kubestr aims to solve the following challenges:

- Identify the various storage options present in a cluster.
- Validate if the storage options are configured correctly.
- Evaluate the storage using common benchmarking tools like FIO.

With so many vendors on offer, picking a storage option for your cluster can be difficult, especially when you also want to benchmark the candidates to see whether they actually suit your use case.

It is also important to check that the storage solution is configured correctly, which is where Kubestr can help. In this post we will discover what Kubestr brings to the table and use it to explore the Longhorn storage solution.

Step 1: Create Civo Kubernetes cluster with Longhorn installed

Naturally, we'll use Civo Kubernetes, which is based on K3s, to experiment with this quickly. If you don’t yet have an account, sign up here. You could also use any other Kubernetes cluster you have access to.

Create a new cluster from the UI (you can also use Civo CLI), then install Longhorn from the Marketplace.

Creating a cluster on Civo

Once the cluster is ready, you should see it listed with its nodes in the Ready state.

Cluster is ready

Make sure you have kubectl installed and the kubeconfig file for your cluster downloaded.
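To point kubectl at the new cluster, set the KUBECONFIG environment variable (the path below is illustrative; use wherever you saved the downloaded file):

$ export KUBECONFIG=~/Downloads/civo-k3s-storage-longhorn-kubeconfig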

kubectl get nodes
NAME                                        STATUS   ROLES                  AGE     VERSION
k3s-sotrage-longhorn-819c2694-master-4135   Ready    control-plane,master   6h30m   v1.20.2+k3s1
k3s-sotrage-longhorn-819c2694-node-ad87     Ready    <none>                 6h30m   v1.20.2+k3s1
k3s-sotrage-longhorn-819c2694-node-ee0f     Ready    <none>                 6h30m   v1.20.2+k3s1

Alternatively, you can install Longhorn with a kubectl command:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.0.0/deploy/longhorn.yaml
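Either way, you can confirm that Longhorn is up by checking its namespace (pod names and counts will vary):

$ kubectl get pods -n longhorn-system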

Step 2: Install Kubestr locally

Here we will install Kubestr on a Mac (Apple Silicon); if you are on another platform, pick the matching archive from the Kubestr releases page. Run:

$ curl -LO https://github.com/kastenhq/kubestr/releases/download/v0.4.41/kubestr_0.4.41_MacOS_arm64.tar.gz

then:

$ tar -xvf kubestr_0.4.41_MacOS_arm64.tar.gz
x LICENSE
x README.md
x kubestr

Make the kubestr binary executable:

$ chmod +x kubestr

Step 3: Experimenting with Kubernetes Storage using Kubestr

Check that Kubestr works

$ ./kubestr -h
kubestr is a tool that will scan your k8s cluster
     and validate that the storage systems in place as well as run
     performance tests.

Usage:
  kubestr [flags]
  kubestr [command]

Available Commands:
  blockmount  Checks if a storage class supports block volumes
  browse      Browse the contents of a CSI PVC via file browser
  completion  Generate the autocompletion script for the specified shell
  csicheck    Runs the CSI snapshot restore check
  fio         Runs an fio test
  help        Help about any command

Flags:
  -h, --help             help for kubestr
  -e, --outfile string   The file where test results will be written
  -o, --output string    Options(json)

Use "kubestr [command] --help" for more information about a command.

Storage solutions enabled on the cluster

Let's check the storage options enabled on the cluster by running kubestr without any arguments:
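$ ./kubestr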

Kubestr in action

In our case, we have the local-path provisioner, which comes pre-installed with K3s, and Longhorn, which we installed.

Checking the Snapshot backup/restore functionality

This is a really handy feature that can help you find out whether backup and restore will work for your persistent volumes.

To enable this, you need to deploy the snapshot CRDs and the snapshot controller, and set up a backup target in Longhorn.
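Before installing, you can check whether the snapshot CRDs already exist in your cluster:

$ kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io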

Install the Snapshot CRDs

We will install the Snapshot CRDs using the files from this GitHub repository folder: https://github.com/kubernetes-csi/external-snapshotter/tree/master/client/config/crd

We will use the kubectl kustomize command to install them in our Kubernetes cluster:

$ kubectl kustomize https://github.com/kubernetes-csi/external-snapshotter/client/config/crd | kubectl apply -f -

The CRDs should get applied to your cluster, which you can verify by running:

$ kubectl get crd | grep snapshot
snapshots.longhorn.io                            2023-08-03T13:39:10Z
volumesnapshotclasses.snapshot.storage.k8s.io    2023-08-03T15:06:45Z
volumesnapshotcontents.snapshot.storage.k8s.io   2023-08-03T15:06:46Z
volumesnapshots.snapshot.storage.k8s.io          2023-08-03T15:06:47Z

Install the Common Snapshot Controller

We will install the Snapshot Controller using the files from this GitHub repository folder: https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/snapshot-controller

Update the namespace to an appropriate value for your environment (e.g. kube-system); a minimal overlay sketch for doing this is shown after the verification step below. Run:

$ kubectl kustomize https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller | kubectl apply -f -

serviceaccount/snapshot-controller created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
deployment.apps/snapshot-controller created

The controller should get deployed to your cluster, which you can verify by running:

$ kubectl get pods -n kube-system | grep snapshot
snapshot-controller-85f68864bb-5btjw   1/1   Running   0   4m22s
snapshot-controller-85f68864bb-sthkm   1/1   Running   0   4m22s
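If you need the snapshot controller in a namespace other than the manifest's default, one option is a small local kustomize overlay (a minimal sketch; the namespace value is just an example):

# kustomization.yaml: override the namespace of the remote manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
  - https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller

Save this as kustomization.yaml and apply it with:

$ kubectl kustomize . | kubectl apply -f -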

Now create the VolumeSnapshotClass CR by creating a YAML file with the below details:

/tmp/longhornsnapshot.yaml:

kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: longhorn
driver: driver.longhorn.io
deletionPolicy: Delete

Apply the file in your cluster:

$ kubectl apply -f /tmp/longhornsnapshot.yaml
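You can confirm the class was registered:

$ kubectl get volumesnapshotclass longhorn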

Now when you run kubestr, you can see the VolumeSnapshotClass appear:

Snapshots in Kubestr

Next, we will set the backup target in Longhorn. We will install MinIO in the cluster and use that as the target:

$ kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml

secret/minio-secret created
secret/minio-secret created
pod/longhorn-test-minio created
service/minio-service created

$ kubectl get secrets | grep minio
NAME                  TYPE                                  DATA   AGE
minio-secret          Opaque                                5      4m6s

Open the Longhorn frontend and configure the settings page as shown below:

Longhorn settings
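If you are using the test MinIO backupstore from the manifest above, the backup target is typically s3://backupbucket@us-east-1/ with minio-secret as the backup target credential secret, but verify these values against the manifest you applied.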

Let's check if snapshot and restore are working with Longhorn:

./kubestr csicheck -s longhorn -v longhorn

The above command does the following:

Volume snapshotting process illustration

Longhorn backup details

Longhorn backup details 2

Longhorn backup status

Output from the command will look something like:

Creating application
  -> Created pod (kubestr-csi-original-pod86r75) and pvc (kubestr-csi-original-pvc8vhg7)
Taking a snapshot
  -> Created snapshot (kubestr-snapshot-20230804000023)
Restoring application
  -> Restored pod (kubestr-csi-cloned-podxhcnn) and pvc (kubestr-csi-cloned-pvcr8zms)
Cleaning up resources
CSI checker test:
  CSI application successfully snapshotted and restored.  -  OK

This helps check that the storage solution is implemented correctly and that snapshot, backup, and restore will work.

Storage Performance evaluation

This Kubestr feature lets you test the performance of your storage using the standard FIO tool; you can also use your own FIO configs. Let's test it against our cluster:

./kubestr fio --help
Run an fio test

Usage:
  kubestr fio [flags]

Flags:
  -f, --fiofile string                The path to a an fio config file.
  -h, --help                          help for fio
  -i, --image string                  The container image used to create a pod.
  -n, --namespace string              The namespace used to run FIO. (default "default")
  -N, --nodeselector stringToString   Node selector applied to pod. (default [])
  -z, --size string                   The size of the volume used to run FIO. Note that the FIO job definition is not scaled accordingly. (default "100Gi")
  -s, --storageclass string           The name of a Storageclass. (Required)
  -t, --testname string               The Name of a predefined kubestr fio test. Options(default-fio)

Global Flags:
  -e, --outfile string   The file where test results will be written
  -o, --output string    Options(json)
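The --fiofile flag lets you supply a custom FIO job file instead of the default-fio test. Below is a minimal sketch of such a file (the file name my-fio.fio and the job parameters are illustrative, not from the original post):

# my-fio.fio: a hypothetical 70/30 mixed random read/write job
[global]
ioengine=libaio
direct=1
gtod_reduce=1

[mixed_rw]
rw=randrw
rwmixread=70
bs=4k
size=1G
iodepth=32

You would then run it with:

./kubestr fio -s longhorn -f my-fio.fio -z 10G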

Let's run the default test with a 10G volume:

./kubestr fio -s longhorn -z 10G          

10Gi volume created

Output of the command:

PVC created kubestr-fio-pvc-n92gm
Pod created kubestr-fio-pod-h988d
Running FIO test (default-fio) on StorageClass (longhorn) with a PVC of Size (10G)
Elapsed time- 35.302211875s
FIO test results:

FIO version - fio-3.34
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=840.649536 BW(KiB/s)=3379
  iops: min=540 max=1030 avg=846.633362
  bw(KiB/s): min=2160 max=4120 avg=3386.833252

JobName: write_iops
  blocksize=4K filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=401.440552 BW(KiB/s)=1622
  iops: min=223 max=546 avg=400.899994
  bw(KiB/s): min=894 max=2187 avg=1604.466675

JobName: read_bw
  blocksize=128K filesize=2G iodepth=64 rw=randread
read:
  IOPS=834.979797 BW(KiB/s)=107412
  iops: min=670 max=1103 avg=836.700012
  bw(KiB/s): min=85844 max=141206 avg=107126.734375

JobName: write_bw
  blocksize=128k filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=427.818054 BW(KiB/s)=55292
  iops: min=334 max=595 avg=430.899994
  bw(KiB/s): min=42752 max=76239 avg=55177.964844

Disk stats (read/write):
  sda: ios=28871/14323 merge=155/367 ticks=1780648/1965618 in_queue=3746266, util=99.831383%
  -  OK

Wrapping Up

Kubestr is a simple, lightweight tool to evaluate the storage options within your cluster. You can run it against multiple clusters by changing the kubeconfig, and compare performance across clusters, clouds, and storage options.
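For example, to run the same FIO test against another cluster (the kubeconfig path is illustrative):

$ KUBECONFIG=~/Downloads/other-cluster-kubeconfig ./kubestr fio -s longhorn -z 10G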

Let us know on Twitter @Civocloud and @SaiyamPathak if you try Kubestr to evaluate Longhorn on Civo Kubernetes!