Civo Kubernetes is powered by K3s, a lightweight Kubernetes distribution. As part of our managed Kubernetes service, we have developed our own custom Container Storage Interface (CSI) driver, so every new Civo Kubernetes cluster comes with a default storage class you can use directly for your persistent workloads. In this tutorial, I will quickly show you how to use a Civo Volume as persistent storage.
Step 1: Create a Civo Kubernetes cluster
You can create the cluster from the UI or from the Civo CLI. For this tutorial, let's create it using the CLI.
$ civo k3s create civo-vol
The cluster civo-vol (0138ab37-9bb5-484f-8a62-6f5a40399a7b) has been created
The above command will create a 3-node cluster named civo-vol.
We will need to get the kubeconfig for the cluster and save it to our desired location. If you do not specify a path, the CLI will save it to its default location.
$ civo k3s config civo-vol --save --local-path /Users/saiyampathak/civo/test/vol.config
Access your cluster with:
KUBECONFIG=/Users/saiyampathak/civo/test/vol.config kubectl get node
Let's make sure that kubectl knows to use our cluster's configuration file:
$ export KUBECONFIG=/Users/saiyampathak/civo/test/vol.config
$ kubectl get nodes
NAME                                   STATUS   ROLES    AGE     VERSION
k3s-civo-vol-75499ca3-node-pool-a4e8   Ready    <none>   7m27s   v1.20.2+k3s1
k3s-civo-vol-75499ca3-node-pool-f651   Ready    <none>   7m21s   v1.20.2+k3s1
k3s-civo-vol-75499ca3-node-pool-a544   Ready    <none>   7m19s   v1.20.2+k3s1
Step 2: Create a Persistent Volume Claim (PVC) on our cluster
The cluster created will have civo-volume as the default storage class, which you can confirm by viewing the storageclass resources on your cluster:
$ kubectl get sc
NAME                    PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path              rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10m
civo-volume (default)   csi.civo.com            Delete          Immediate              false                  10m
Let's create a PVC that will automatically trigger a PersistentVolume (PV) creation based on the specification. Save the following snippet as pvc.yaml in your current directory:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: civo-volume-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Then, apply the persistent volume claim configuration to your cluster:
$ kubectl create -f pvc.yaml
persistentvolumeclaim/civo-volume-test created
You can verify that this works by running checks for PersistentVolume and PersistentVolumeClaim resources:
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-11509930-bf05-49ec-8814-62744e4606c4   3Gi        RWO            Delete           Bound    default/civo-volume-test   civo-volume             2s

$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
civo-volume-test   Bound    pvc-11509930-bf05-49ec-8814-62744e4606c4   3Gi        RWO            civo-volume    13m
The back-end of the Civo persistent volume is powered by StorageOS.
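If you are curious which CSI driver provisioned the volume, you can inspect the PV spec directly (using the PV name from the output above; yours will differ). This should report the csi.civo.com provisioner we saw in the storage class listing:

$ kubectl get pv pvc-11509930-bf05-49ec-8814-62744e4606c4 -o jsonpath='{.spec.csi.driver}'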
Step 3: Create a pod to use a persistent volume
Let's create a pod that uses the volume we just created with the following pod.yaml file, again saved in your current directory:
apiVersion: v1
kind: Pod
metadata:
  name: civo-vol-test-pod
spec:
  volumes:
    - name: civo-vol
      persistentVolumeClaim:
        claimName: civo-volume-test
  containers:
    - name: civo-vol-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: civo-vol
And let's apply it to our cluster:
$ kubectl create -f pod.yaml
pod/civo-vol-test-pod created
When we check the status, it should appear as running and ready:
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
civo-vol-test-pod   1/1     Running   0          54s
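Optionally, you can write a small test file into the mounted volume now so we can check later that the data survives the pod being rescheduled. This is a minimal sketch; the index.html name and its contents are just an illustration:

$ kubectl exec civo-vol-test-pod -- sh -c 'echo "hello from civo-vol" > /usr/share/nginx/html/index.html'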
Step 4: Cordon the node & delete the pod
Now we will cordon the node the pod is running on and delete the pod we just created. When we re-create the pod, it will be scheduled on a different node and re-attach the same volume.
You will need to find the node the pod is running on so that you cordon the correct one. The easiest way to do this is by running kubectl get pods -o wide and checking the NODE column. Take that node name and cordon it off from further scheduling:
$ kubectl cordon k3s-civo-vol-75499ca3-node-pool-a544
node/k3s-civo-vol-75499ca3-node-pool-a544 cordoned
Then delete the pod:
$ kubectl delete pod civo-vol-test-pod --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "civo-vol-test-pod" force deleted
When this deletion is complete, we will re-create the pod. As we cordoned off the original node, it will be created on another one in the cluster. However, as it is set to use the persistent volume we defined earlier, it should make no difference.
Re-create the pod on your cluster:
$ kubectl create -f pod.yaml
pod/civo-vol-test-pod created
Verify that it is running, and note which node it was scheduled on:
$ kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
NAME                STATUS    NODE
civo-vol-test-pod   Running   k3s-civo-vol-75499ca3-node-pool-a4e8
Also, if you check the events (kubectl get events) for the pod, you will see that it has attached the same volume backing the PVC we defined earlier:
Events:
  Type    Reason                  Age    From                     Message
  ----    ------                  ----   ----                     -------
  Normal  Scheduled               3m20s                           Successfully assigned default/civo-vol-test-pod to k3s-civo-vol-75499ca3-node-pool-a4e8
  Normal  SuccessfulAttachVolume  3m7s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-11509930-bf05-49ec-8814-62744e4606c4"
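If you wrote the optional test file into the volume earlier, you can read it back from the re-created pod to confirm the data persisted across the rescheduling; it should print the text written in Step 3:

$ kubectl exec civo-vol-test-pod -- cat /usr/share/nginx/html/index.html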
You can use Civo Volumes in your Civo Kubernetes clusters to run your stateful workloads. The civo-volume storage class is the default storage class for clusters on our managed service, and its back end is powered by StorageOS.
If there is a feature you would love to see regarding Civo Volumes, persistent storage or anything else, head over to your account's suggestions page and see if it's already been requested or post one of your own!