Resource limiting CPU and Memory in Kubernetes
In my previous post, we saw how to configure a Kubernetes cluster, deploy pods and grow the cluster. In this post I am going to show how to limit CPU and memory resources in a Kubernetes deployment. Resources can also be limited at the namespace level, which will be covered in a later post.
I am going to use a special image, vish/stress. This image accepts arguments for allocating CPU and memory, which makes it handy for stress testing.
My master and worker nodes each have 4 GB of memory and 2 CPU cores, running in VirtualBox.
First, pull the vish/stress image and create the deployment. The vish/stress image provides options for generating CPU and memory stress.
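A minimal way to create it (a sketch; the deployment name stress-test matches the pod names shown later in this post):

    kubectl create deployment stress-test --image vish/stress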
Wait until the pod status changes to Running.
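You can watch the pod status with, for example:

    kubectl get pods -w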
Now verify the logs of the container. In my case the container ID for pod stress-test-7795ffcbb-r9mft is e8e43da13b23.
You can also use "kubectl logs stress-test-7795ffcbb-r9mft" (though it was not working on my server).
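On the worker node the container logs can be checked with the Docker CLI (assuming Docker is the container runtime), or with kubectl from the master:

    docker logs e8e43da13b23
    kubectl logs stress-test-7795ffcbb-r9mft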
It shows 0 memory being allocated. By default the stress image does not allocate any memory or CPU.
Limiting Memory and CPU for the pod
We are going to make this deployment allocate more memory, set resource limits for CPU and memory, and then monitor the deployment's behaviour.
Export the YAML for the current deployment “stress-test” and add the resource limits along with the memory/CPU allocation arguments.
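For example, the deployment can be exported and edited roughly as below. The requests values and the extra allocation arguments (-mem-alloc-size, -mem-alloc-sleep) are illustrative; the limits and the 2-CPU / 5050 MB allocation match what is described next. Check the vish/stress help output if the argument names differ in your image version.

    kubectl get deployment stress-test -o yaml > stress-test.yaml

Then edit the container section (under spec.template.spec) of stress-test.yaml along these lines:

    containers:
    - name: stress
      image: vish/stress
      resources:
        limits:
          cpu: "1"
          memory: "4Gi"
        requests:
          cpu: "0.5"
          memory: "500Mi"
      args:
      - -cpus
      - "2"
      - -mem-total
      - "5050Mi"
      - -mem-alloc-size
      - "100Mi"
      - -mem-alloc-sleep
      - "1s"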
In the above example I restricted the CPU to 1 core and the memory to 4 GB using the limits option. I also added arguments to allocate 2 CPU cores and 5050 MB (~5 GB) of memory.
The requests and limits options are analogous to soft and hard limits in Linux.
The total memory available on my worker node is only 4 GB; I am over-allocating memory just to observe the behaviour. In a real deployment you would set the limits below the available memory/CPU, otherwise they serve no purpose.
Now apply the new YAML to the deployment and wait for the pods to reach Running status.
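Assuming the edited file was saved as stress-test.yaml:

    kubectl apply -f stress-test.yaml
    kubectl get pods -w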
Check the logs of the container. It is now trying to allocate 5050 MB of memory and 2 CPUs.
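The same log commands as before apply; note that the rollout creates a new pod and container, so substitute the new names:

    kubectl logs <new-pod-name>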
Open a new terminal and monitor the memory usage on the worker node.
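A standard Linux tool on the worker node is enough for this, for example:

    watch -n 1 free -m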
The memory usage slowly rises to full usage and then drops. After a few attempts the pod goes into “CrashLoopBackOff” status.
Memory usage plotted as a graph
To verify the reason for the container termination, we can describe the pod. In our case it clearly says OOMKilled, with 26 restarts.
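For example (substitute the current pod name):

    kubectl describe pod <pod-name>

Look at the container's Last State / Reason and Restart Count fields in the output.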