Civo Kubernetes is powered by K3s, a lightweight Kubernetes distribution. As part of our managed Kubernetes service, we have developed our own in-house Container Storage Interface (CSI) driver, so every new Civo Kubernetes cluster comes with a default storage class that you can use directly for your persistent workloads. In this tutorial, I will show you how to use a Civo Volume as persistent storage.
To follow along, you will need a Civo account, so if you have not signed up yet, create one now. You will also need kubectl installed for your operating system.
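Once installed, you can confirm that kubectl is available by printing the client version:
$ kubectl version --client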
Step 1: Create a Civo Kubernetes cluster
You can create the cluster from the Civo dashboard or from the Civo CLI. Check out our guide to creating a cluster for detailed steps.
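If you prefer the CLI, a minimal sketch looks like this (it assumes the civo CLI is installed and authenticated with a default region set; the cluster name civo-vol is just an example):
# Create a two-node cluster and wait for it to become ready
$ civo kubernetes create civo-vol --nodes 2 --wait
# Save the cluster's kubeconfig, merging it into ~/.kube/config
$ civo kubernetes config civo-vol --save --merge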
Step 2: Create a Persistent Volume Claim (PVC) on the cluster
The cluster created will have civo-volume as the default storage class, which you can confirm by viewing the storageclass resources on your cluster:
$ kubectl get sc
NAME                    PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path              rancher.io/local-path   Delete          WaitForFirstConsumer   false                  10m
civo-volume (default)   csi.civo.com            Delete          Immediate              false                  10m
Let's create a PVC that will automatically trigger a PersistentVolume (PV) creation based on the specification. Save the following snippet as pvc.yaml in your current directory:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: civo-volume-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Then, apply the persistent volume claim configuration to your cluster:
$ kubectl create -f pvc.yaml
persistentvolumeclaim/civo-volume-test created
You can verify that this works by running checks for PersistentVolume and PersistentVolumeClaim resources:
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-11509930-bf05-49ec-8814-62744e4606c4   3Gi        RWO            Delete           Bound    default/civo-volume-test   civo-volume             2s
$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
civo-volume-test   Bound    pvc-11509930-bf05-49ec-8814-62744e4606c4   3Gi        RWO            civo-volume    13m
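For more detail on how the claim was provisioned, kubectl describe shows the events emitted by the CSI driver for the claim:
$ kubectl describe pvc civo-volume-test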
Step 3: Create a pod to use a persistent volume
Let's create a pod that uses the volume we just created, with the following pod.yaml file, again saved in your current directory:
apiVersion: v1
kind: Pod
metadata:
  name: civo-vol-test-pod
spec:
  volumes:
    - name: civo-vol
      persistentVolumeClaim:
        claimName: civo-volume-test
  containers:
    - name: civo-vol-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: civo-vol
And let's apply it to our cluster:
$ kubectl create -f pod.yaml
pod/civo-vol-test-pod created
When we check the status, it should appear as running and ready:
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
civo-vol-test-pod   1/1     Running   0          54s
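To make the persistence easy to verify later, let's write a marker file into the mounted volume. (The file name and contents here are arbitrary; the path matches the mountPath from pod.yaml.)
$ kubectl exec civo-vol-test-pod -- sh -c 'echo "hello from civo-vol" > /usr/share/nginx/html/index.html'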
Step 4: Cordon the node & delete the pod
Now we will cordon the node the pod is running on and delete the pod we just created. When we re-create the pod, it will be scheduled on a different node, and the volume will follow it there.
To cordon the correct node, you first need to find the node the pod is running on. The easiest way to do this is by running kubectl get pods -o wide. Take that node name and cordon it off from further scheduling:
$ kubectl cordon k3s-civo-vol-75499ca3-node-pool-a544
node/k3s-civo-vol-75499ca3-node-pool-a544 cordoned
Then delete the pod:
$ kubectl delete pod civo-vol-test-pod --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "civo-vol-test-pod" force deleted
When this deletion is complete, we will re-create the pod. As we cordoned off the original node, it will be scheduled on another node in the cluster. However, since it is set to use the persistent volume we defined earlier, this should make no difference to our data.
Re-create the pod on your cluster:
$ kubectl create -f pod.yaml
pod/civo-vol-test-pod created
Verify that it is running, and check which node it was scheduled on:
$ kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
NAME                STATUS    NODE
civo-vol-test-pod   Running   k3s-civo-vol-75499ca3-node-pool-a4e8
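If you wrote the marker file earlier, you can read it back from the re-created pod to confirm the data survived the move to a new node:
$ kubectl exec civo-vol-test-pod -- cat /usr/share/nginx/html/index.html
hello from civo-vol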
Also, if you check the events for the pod (for example, with kubectl describe pod civo-vol-test-pod), you will see that the volume provisioned for our PVC earlier was attached to the new pod:
Events:
  Type    Reason                  Age    From                     Message
  ----    ------                  ----   ----                     -------
  Normal  Scheduled               3m20s                           Successfully assigned default/civo-vol-test-pod to k3s-civo-vol-75499ca3-node-pool-a4e8
  Normal  SuccessfulAttachVolume  3m7s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-11509930-bf05-49ec-8814-62744e4606c4"
Wrapping up
You can use Civo Volumes in your Civo Kubernetes clusters to run your stateful workloads. The civo-volume storage class is the default storage class for clusters on our managed service, and its back end is powered by DataCore Bolt.
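As a next step, the same storage class can back a StatefulSet through volumeClaimTemplates, which give each replica its own Civo Volume. Here is a minimal sketch (the names web and www are illustrative, not from this tutorial):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "web"
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: www
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: civo-volume
        resources:
          requests:
            storage: 3Gi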