Understanding container resource requests and limits

By Saiyam Pathak
Director of Technical Evangelism

Description

Discover how container resource requests and limits work and what we can do to maintain them.


Transcription

Introduction to container resource requests and limits

In this lecture, we'll go through container resource requests and limits. They are defined under the resources section of a container. Requests are the guaranteed resources that a container will get, and limits define the upper bound beyond which the container cannot go.
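
As a reference, this is roughly what the resources section of a container looks like; the pod and image names here are placeholders rather than the exact manifest used in this lecture, while the request and limit values match the ones we use below:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # placeholder name
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    resources:
      requests:              # guaranteed resources for the container
        cpu: "500m"
        memory: "100Mi"
      limits:                # upper bound the container cannot exceed
        cpu: "1"
        memory: "200Mi"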

How can we maintain container requests and limits?

Now, what if we ask for more CPU than the limit? In this case, in the args section of the YAML file, you can see the container is trying to use two cores, while the limit is one. What will happen is that Kubernetes will throttle the CPU of this particular container. We can create the pod using the kubectl apply -f cpumem.yaml command. The pod is created, and we can verify it with the kubectl get pods command; we can see that the container is running. Now we can describe the pod using the kubectl describe pod cpu-mem-demo command, and we can see that the CPU request we defined is 500 millicores and the CPU limit is 1, and both appear in the describe output. In the meantime, the metrics are being generated.
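
The CPU part of cpumem.yaml looks roughly like this. The stress image and its arguments are an assumption modelled on the standard Kubernetes CPU-stress example, not confirmed by the lecture, but the 500-millicore request, the limit of 1, and the attempt to use two cores match what is described above:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-mem-demo
spec:
  containers:
  - name: cpu-mem-demo
    image: vish/stress       # assumed stress image, as in the Kubernetes docs example
    resources:
      requests:
        cpu: "500m"          # guaranteed half a core
      limits:
        cpu: "1"             # throttled at one core
    args:
    - -cpus
    - "2"                    # the process tries to use two cores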

Let's try another scenario where we request more CPU than the nodes even have. If we use the kubectl top node command, which is backed by the metrics server, you can see that CPU usage is around 30%, and the nodes have 2,000 millicores each, meaning a 2-core CPU. So what we'll do is ask for three cores.
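
Besides kubectl top node, you can also check how much CPU a node can actually hand out to pods by looking at its allocatable resources. The node name below is a placeholder, and the grep filter is just a convenience, not something from the lecture:

kubectl top node
kubectl describe node <node-name> | grep -A 8 Allocatable   # shows allocatable cpu and memory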

We will edit the YAML file with the command vi cpumem.yaml, change the pod name to cpu-mem-demo2, and change the three CPU values to 3. Here, we're requesting three CPUs and setting the guaranteed amount to three as well. Ideally, this pod should not even get scheduled. Finally, we apply the edited manifest with the kubectl apply -f cpumem.yaml command. The pod is created, but it is Pending. Let's see the reason for that by describing the pod.
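
After the edit, the changed fields look roughly like this; the rest of cpumem.yaml stays the same, and the stress arguments are the same assumption as in the earlier sketch:

metadata:
  name: cpu-mem-demo2        # new pod name
    resources:
      requests:
        cpu: "3"             # request three full cores, more than any node has
      limits:
        cpu: "3"
    args:
    - -cpus
    - "3"                    # try to use three cores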

We will use the kubectl describe pod cpu-mem-demo2 command, and we can see that the pod is Pending because of a FailedScheduling event caused by insufficient CPU. That is the expected behavior, because we only have two-core nodes. Now we'll delete this demo2 pod with the kubectl delete pod cpu-mem-demo2 --force --grace-period=0 command and check the first pod with the kubectl top pod command. You can see that the CPU usage comes to 999 millicores, just under the limit we defined. If we run the kubectl describe pod command again, you can see the limit we defined was one CPU, and even though the container is asking for two in the YAML file, it stays under one, because Kubernetes has throttled the CPU.
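
For reference, this is the command sequence used in this step; the kubectl get events line is an optional addition on my part (not from the lecture) that filters the scheduling events for the pending pod directly:

kubectl describe pod cpu-mem-demo2
kubectl get events --field-selector involvedObject.name=cpu-mem-demo2   # optional: only this pod's events
kubectl delete pod cpu-mem-demo2 --force --grace-period=0
kubectl top pod
kubectl describe pod cpu-mem-demo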

Now, let's move on to the memory demo. We will comment out the CPU section, because the CPU limit of 1 is not needed for this demo, and we'll also comment out the CPU argument. Here is what we are trying to do: the memory request for this particular container is 100Mi, which is roughly 100 megabytes, the limit is 200Mi, and the container is trying to allocate 250 megabytes. So we are asking for 250, which is more than even the limit allows.
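
The memory part of the manifest looks roughly like this. The polinux/stress image and its arguments are an assumption modelled on the standard Kubernetes memory-stress example, but the 100Mi request, the 200Mi limit, and the 250-megabyte allocation match the lecture:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-mem-demo2
spec:
  containers:
  - name: cpu-mem-demo2
    image: polinux/stress    # assumed stress image, as in the Kubernetes docs example
    resources:
      requests:
        memory: "100Mi"      # guaranteed memory
      limits:
        memory: "200Mi"      # hard memory cap
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]   # allocate 250 MB, above the 200Mi limit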

Let's see what happens here. We create the pod with the kubectl apply -f cpumem.yaml command. To verify, we can use the kubectl get pods command: the pod starts in the ContainerCreating state and then shows OOMKilled. When the container tries to use more memory than it is allowed, it gets OOMKilled. If we open the file again with vi cpumem.yaml, we can see the container is trying to allocate 250 megabytes, which is above the 200Mi limit, so it runs out of memory.

Now, let's try allocating just 25 megabytes instead. The container will ask for 25, the guaranteed request is 100Mi, and the limit is 200Mi, so this should run fine. First, we need to delete the failed pod with the kubectl delete pod cpu-mem-demo2 --force --grace-period=0 command, then apply cpumem.yaml again, and the pod is created. If we verify with the kubectl get pods command, we can see the pod is now Running.
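
Assuming the same stress arguments as in the earlier sketch, the only change for this working run is the allocation size:

    args: ["--vm", "1", "--vm-bytes", "25M", "--vm-hang", "1"]   # 25 MB fits well within the 200Mi limit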

This is how you set resource requests and limits. It's very important because whenever you're scheduling a pod with specific resource needs, you have to take into account the resources your nodes actually have and how efficiently your pods can use them. With that, this covers CPU and memory requests and limits for a pod. Thank you for watching, and see you in the next lecture.
