How to create a local Kubernetes volume

By Saiyam Pathak
Director of Technical Evangelism


Uncover the basics of local volumes: creating a StorageClass, a PersistentVolume, and a PersistentVolumeClaim, and using them inside a pod.



In this video, we'll discuss local volumes and see how we can create a StorageClass, PV, and PVC, and use them inside a pod. First, let's look at the StorageClass. In this scenario, the StorageClass is named local-storage, and the provisioner is kubernetes.io/no-provisioner. Local volumes do not support dynamic provisioning, so the PV and the PVC have to be created manually. The volumeBindingMode is WaitForFirstConsumer. This delays the binding of a PVC to a PV until a pod using the claim is scheduled, which lets the scheduler take all of the pod's scheduling constraints into account.
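A minimal sketch of what sc.yaml could look like for this setup (the class name local-storage matches the walkthrough; the provisioner and binding mode are the standard values for local volumes):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning for local volumes
volumeBindingMode: WaitForFirstConsumer     # delay PVC binding until a pod is scheduled
```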

Creating Local Volumes in Kubernetes with PV and PVC

So, we have the StorageClass definition; let's first create it with kubectl create -f sc.yaml. Then, we can verify it using kubectl get sc: the StorageClass is created. Now, let's see what we have in the PV. This is a simple PersistentVolume, and in the spec section we have defined the storageClassName as local-storage, the class we just created, along with the capacity, the accessMode of ReadWriteOnce, and a hostPath.
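Based on that description, pv.yaml could be sketched roughly like this. The PV name demo-pv and the 1Gi capacity are assumptions (the walkthrough only says the PVC requests less than the PV's capacity); the /opt/data path is the one used on the node later in the walkthrough:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv           # assumed name; not stated in the walkthrough
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi          # assumed size; not stated in the walkthrough
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/data       # directory on the node's filesystem
```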

This will be on the hostPath /opt/data. Let's create the PersistentVolume using kubectl create -f pv.yaml. Now, let's see what we have in the PVC. It's a simple PersistentVolumeClaim object: the storageClassName is local-storage and the accessMode is ReadWriteOnce, the same as the PV. The resource request is less than the capacity of the PV, so we are good there. Let's create it with kubectl create -f pvc.yaml, and verify with kubectl get pv,pvc. The PV and PVC are there, but the PVC status is Pending. Because of the WaitForFirstConsumer binding mode, the claim will stay Pending until we create a pod that uses it. Now, let's look at the pod configuration. It is also a simple pod: in the volumes section, we have defined a persistentVolumeClaim with the claim name we just created, and a simple nginx container where the volume is mounted at /usr/share/nginx/html inside the pod.
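The PVC and pod described above could look roughly like the following. The claim name demo-pvclaim and pod name task-pv-pod come from the kubectl commands used later; the 500Mi request and the volume name task-pv-storage are assumed for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvclaim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi      # assumed; must be no larger than the PV's capacity
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage          # assumed volume name
      persistentVolumeClaim:
        claimName: demo-pvclaim
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: task-pv-storage
          mountPath: /usr/share/nginx/html
```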

Let's create the pod with the kubectl create -f pod.yaml command. Now, if we look at the PV and PVC again, we can see the status has immediately turned to Bound. We can use the command kubectl get pods -owide to see where the pod is running: it's on node3. Let's exec into the pod using kubectl exec -it task-pv-pod -- sh. Inside, we go to the mount directory with cd /usr/share/nginx/html and run echo "hello" >> index.html. If we then run curl localhost, we get hello. Since the pod is running on node3, and I already have node3 open, let's cd /opt/data on the node itself. We can see index.html is there. This is how the PersistentVolume and the PVC are used with local storage: the binding happens when the pod is created. Now, we will exit and delete the pod using kubectl delete pod task-pv-pod --force --grace-period 0. Even though we have deleted the pod, the PV and the PVC will still exist, because the PersistentVolume and PersistentVolumeClaim have a lifecycle separate from the pod's.

The PersistentVolume and PersistentVolumeClaim objects are separate resources that Kubernetes manages independently of the pod lifecycle. If we use the command kubectl get pv,pvc, we can see both are still there and still Bound. Since the reclaim policy is Retain, the PV will remain even if the PVC is deleted. Let's delete the claim using kubectl delete persistentvolumeclaim/demo-pvclaim. The PVC is deleted, and now, if we use kubectl get pv, we still have the PV, with the Released status. This is how local volumes work in Kubernetes: you create the local-storage StorageClass, use it to create the PV and PVC manually, and the PV and the PVC won't bind until a pod that uses the claim is created. That's it for this lecture. Thank you for watching. See you in the next one.
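The Retain behavior seen above comes from the PV's persistentVolumeReclaimPolicy field. For statically created PVs it defaults to Retain, and it can also be set explicitly in the PV spec, for example:

```yaml
spec:
  persistentVolumeReclaimPolicy: Retain   # keep the volume and its data after the PVC is deleted
```

With Retain, a Released PV must be cleaned up and made available again manually before another claim can use it.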