Create dynamic volume provisioning of PersistentVolumes and PersistentVolumeClaims

By Saiyam Pathak
Director of Technical Evangelism

Description

Identify the fundamentals of dynamic volume provisioning of PersistentVolumes and PersistentVolumeClaims with the help of a StorageClass and a StatefulSet.


Transcription

Introduction to dynamic volume provisioning

Hi, in this video, we'll be talking about dynamic volume provisioning. This means that whenever a developer creates a PVC, a PersistentVolumeClaim, a PV, a PersistentVolume, is created automatically. For this to work, there has to be a StorageClass with a specific provisioner, so that whenever a PVC is created, the StorageClass automatically creates a PV using the provisioner it defines.
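As an illustration of that flow, a standalone PVC that would trigger dynamic provisioning could look like the sketch below; the claim name and requested size are hypothetical placeholders, and local-path is the StorageClass we set up in the next step.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim             # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path    # the StorageClass whose provisioner creates the PV
  resources:
    requests:
      storage: 128Mi              # assumed size for this sketch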

Creating a storage class

In this particular demo, we'll be using the Rancher local-path-provisioner. Let's first apply it with kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml. It creates all the necessary resources: the namespace, service account, roles, deployment, and the storage class as well. If you run kubectl get sc, you will see that it has created a StorageClass named local-path, and its provisioner is set to rancher.io/local-path by the YAML file we have just applied to the cluster.
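The StorageClass created by that manifest looks roughly like the sketch below; the field values shown here are the local-path-provisioner defaults, so check kubectl get sc local-path -o yaml on your own cluster for the exact object.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path        # tells Kubernetes which provisioner creates the PVs
volumeBindingMode: WaitForFirstConsumer   # the PV is only created once a pod using the PVC is scheduled
reclaimPolicy: Delete                     # deleting the PVC also deletes the PV and its data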

Getting started with StatefulSet

Now, let's see how a developer can use the local-path provisioner: they request a PVC, and the PV should automatically get generated. Here is a StatefulSet named local-test with three replicas, running a busybox container with a simple command. The critical piece is the volumeClaimTemplates section defined in the StatefulSet. Whenever this StatefulSet is applied to the cluster, it creates a volume claim, a PVC, for the first replica, and as soon as that PVC is created, because we have set storageClassName to local-path, the local-path-provisioner automatically creates a PV on the node. This process repeats until we have three replicas. Let's create the StatefulSet with kubectl create -f statefulset.yaml. The StatefulSet is created. Now, let's verify the creation with kubectl get pv,pvc.
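For reference, statefulset.yaml could look roughly like the sketch below; the service name, container command, and storage size are assumptions for illustration and may differ from the exact manifest used in the demo.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-test"                     # assumed headless-service name, not shown in the demo
  replicas: 3
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
        - name: test-container
          image: busybox
          command: ["sh", "-c", "sleep 100000"]  # assumed command to keep the pod running
          volumeMounts:
            - name: local-vol
              mountPath: /usr/test-pod           # path used later in the demo
  volumeClaimTemplates:
    - metadata:
        name: local-vol                          # yields PVCs named local-vol-local-test-0, -1, -2
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "local-path"           # triggers the dynamic provisioner
        resources:
          requests:
            storage: 128Mi                       # assumed size for this sketch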

Creating PVs and PVCs through dynamic volume provisioning

The PVCs start getting created one by one. The one for local-test-0 is already created and bound, and local-test-1 is also bound. Now the claim for the last replica gets created. The PVCs for local-test-0, 1, and 2 are all created along with their respective PVs, and they are in Bound status. We should have three replicas of the pod running, and indeed local-test-0, 1, and 2 are running. Inside each pod, the volume is mounted at /usr/test-pod. Let's go inside one of the pods using kubectl exec -it local-test-0 -- sh, then cd /usr/test-pod. We go into that location and create a file using touch test. We have created the file, so let's exit. Now, let's see where pod 0 is running using kubectl get pods -owide. It is running on kubeadm3, so node3. I already have node3 open, and by default, the local-path-provisioner stores its data in the /opt/local-path-provisioner/ directory on the node. Here, we can see that two of the pods were scheduled onto this node.
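Put together, the verification steps from this part of the demo look like this; the node-side path is the provisioner's default data directory.

# create a file on the dynamically provisioned volume from inside pod 0
kubectl exec -it local-test-0 -- sh
cd /usr/test-pod
touch test
exit

# find out which node local-test-0 landed on
kubectl get pods -owide

# on that node, the local-path provisioner keeps volume data here
ls /opt/local-path-provisioner/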

There are two directories created here, one for test-0 and one for test-1. We'll cd into the test-0 directory and run ls, and we can see the file is there. The PV that was created has a reclaim policy of Delete, which means that as soon as we delete the PVC, the PV also gets deleted along with the data. Now, let's scale the replicas down to two. We will use kubectl edit statefulset local-test and reduce the replicas to 2, so it should delete local-test-2. The pod is deleted; now let's look at the PV and PVC with kubectl get pv,pvc. The PV and the PVC are separate from the pod lifecycle, so the data is still there. Next, we'll delete the PVC using kubectl delete persistentvolumeclaim/local-vol-local-test-2, and we can see that the PV is first released and then deleted. This is how the PV is deleted.
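The same scale-down and cleanup can also be done non-interactively; here is a sketch that uses kubectl scale instead of kubectl edit.

# scale the StatefulSet from 3 replicas down to 2 (removes pod local-test-2)
kubectl scale statefulset local-test --replicas=2

# the PVC and PV for replica 2 outlive the pod
kubectl get pv,pvc

# delete the claim explicitly; with a Delete reclaim policy the PV is released and then removed
kubectl delete persistentvolumeclaim/local-vol-local-test-2
kubectl get pv,pvc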

If we look at the pods using kubectl get pods -owide, we can see that pod number 2 was running on this particular node, node3. Now, if I cd .. back to the previous directory and run ls, we can see that its directory has also been removed automatically. That is how PersistentVolumes and PersistentVolumeClaims work with dynamic provisioning: a developer only has to define what volume is needed, and the storage class with the defined provisioner automatically creates the PVs dynamically. That's it for this lecture. Thank you for watching. See you in the next one.


You've successfully completed our course on Kubernetes volumes

We hope you enjoyed learning and encourage you to check out our other courses to further expand your knowledge of Kubernetes