In my previous post, we saw how to configure a Kubernetes cluster, deploy pods, and grow the cluster. In this post I am going to show how to limit CPU and memory resources in a Kubernetes pod. We can also limit resources at the namespace level, which will be covered in a later post.
I am going to use a special container image for this purpose: vish/stress. This image has options for stress testing CPU and memory, so we can push CPU and memory usage to certain limits and observe the behaviour of the container and the Kubernetes nodes.
Both my master and worker nodes have 4 GB of memory and 2 CPU cores each, running as VMs in VirtualBox.
First, download the vish/stress image and run it.
vikki@drona-child-1:~$ kubectl run stress-test --image=vish/stress
deployment.apps "stress-test" created
Wait till the pod status changes to Running.
vikki@drona-child-1:~$ kubectl get pods
NAME                          READY     STATUS              RESTARTS   AGE
nginx-768979984b-mmgbj        1/1       Running             6          65d
stress-test-7795ffcbb-r9mft   0/1       ContainerCreating   0          6s
vikki@drona-child-1:~$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
nginx-768979984b-mmgbj        1/1       Running   6          65d
stress-test-7795ffcbb-r9mft   1/1       Running   0          32s
By default this pod will not consume any memory or CPU. We can verify this by checking the logs of the respective container. In my case there is only one container running inside the pod stress-test-7795ffcbb-r9mft, which is e8e43da13b23.
You can also use "kubectl logs stress-test-7795ffcbb-r9mft" (though it was not working on my server).
[root@drona-child-3 ~]# docker logs e8e43da13b23
I0819 13:01:00.200714       1 main.go:26] Allocating "0" memory, in "4Ki" chunks, with a 1ms sleep between allocations
I0819 13:01:00.200804       1 main.go:29] Allocated "0" memory
It should show that 0 memory is being allocated. Open a new terminal on the node where the stress pod is running and monitor the memory usage of the node.
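For example, from that terminal any standard Linux memory tool will do; `free` is the simplest way to watch the node:

```shell
# One snapshot of the node's memory usage, in megabytes
free -m

# Or keep it refreshing every second while the pod runs:
# watch -n 1 free -m
```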
Limiting Memory and CPU for the pod
Now we are going to do two things:
- Configure the pod to stress more memory and CPU usage
- Configure the resource limit for CPU and Memory in the pod.
Finally, we will monitor the behavior of the deployment.
To make it easier, we will export the current configuration of the deployment as a YAML file and edit it to add the stress and resource limit configuration.
vikki@drona-child-1:~$ kubectl get deployment stress-test -o yaml > stress.test.yaml
vikki@drona-child-1:~$ vim stress.test.yaml
In the edited file, I have added a resources section for the resource limits and an args section for the stress options.
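For reference, the relevant part of the edited container spec might look like the sketch below. The limits and the stress values reflect what is used later in this post; the requests values and the exact vish/stress flag names are my own assumptions, so adjust them against your exported file:

```yaml
spec:
  template:
    spec:
      containers:
      - name: stress-test
        image: vish/stress
        resources:
          limits:             # hard limits: 1 CPU, 4 GB memory
            cpu: "1"
            memory: "4Gi"
          requests:           # soft limits (illustrative values, assumption)
            cpu: "0.5"
            memory: "500Mi"
        args:                 # stress options (flag names assumed for vish/stress)
        - -cpus
        - "2"
        - -mem-total
        - "5050Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"
```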
The two parameters under resources, requests and limits, are analogous to the soft and hard limits in Linux. Kubernetes will place the pod on a node that has enough free resources to accommodate the requests value.
If the pod's usage exceeds the limits value, Kubernetes will throttle it, terminate it, or autoscale it (using the Horizontal Pod Autoscaler), depending on the configuration and environment.
Understanding CPU and Memory resource limits in Kubernetes
| Resource | Units | Unit suffixes | When usage exceeds limits |
| --- | --- | --- | --- |
| CPU | 1 CPU = 1 hyperthread on bare metal | m (milliCPU) | Throttled |
| Memory | Bytes | E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki | Terminated |
The requests and limits for a pod are calculated as the sum of the resource usage of all containers in that pod.
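As an illustration (a hypothetical pod, not part of this deployment), the scheduler treats the following pod as requesting 750m CPU and 768Mi memory in total:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod     # hypothetical example
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
  - name: sidecar
    image: busybox
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
# Scheduling decision uses the sum: 500m + 250m = 750m CPU,
# 512Mi + 256Mi = 768Mi memory
```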
I configured the hard limit as 1 CPU and 4 GB of memory using the limits option. The pod is also configured to stress 2 CPUs and 5050 MB (~5 GB) of memory.
My worker node has only 4 GB of memory in total, so in this example I am deliberately over-allocating memory just to observe the behaviour. In a real deployment you would set the limits lower than the available memory/CPU; otherwise the limits make no sense.
Now apply the new YAML to the deployment and wait for the pods to reach Running status.
vikki@drona-child-1:~$ kubectl replace -f stress.test.yaml
deployment.extensions "stress-test" replaced
vikki@drona-child-1:~$ kubectl get pods
NAME                           READY     STATUS              RESTARTS   AGE
nginx-768979984b-mmgbj         1/1       Running             6          65d
stress-test-7795ffcbb-r9mft    0/1       ContainerCreating   0          8s
stress-test-78944d5478-wmmtc   0/1       Terminating         0          20m
vikki@drona-child-1:~$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
nginx-768979984b-mmgbj        1/1       Running   6          65d
stress-test-7795ffcbb-r9mft   1/1       Running   0          32s
Check the logs of the container. It is trying to allocate 5050 MB of memory and consume 2 CPUs.
[root@drona-child-3 ~]# docker logs a9ebee58d3ca
I0819 13:36:36.618405       1 main.go:26] Allocating "5050Mi" memory, in "100Mi" chunks, with a 1s sleep between allocations
I0819 13:36:36.618655       1 main.go:39] Spawning a thread to consume CPU
I0819 13:36:36.618674       1 main.go:39] Spawning a thread to consume CPU
Open a new terminal and monitor the Memory usage.
The memory usage slowly rises to full and finally drops. After a few attempts, the pod goes into "CrashLoopBackOff" status.
Memory usage plotted as a graph:
vikki@drona-child-1:~$ kubectl get pods
NAME                           READY     STATUS             RESTARTS   AGE
nginx-768979984b-mmgbj         1/1       Running            6          65d
stress-test-57bb689598-8h4zm   0/1       CrashLoopBackOff   3          4m
To verify the reason for the container termination, we can describe the pod. In our case it clearly says OOMKilled, and the container was restarted 26 times.
vikki@drona-child-1:~$ kubectl describe pod stress-test-dbbcf4fd7-pqh99 | grep -A 5 State
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Sun, 19 Aug 2018 21:23:01 +0530
      Finished:     Sun, 19 Aug 2018 21:23:24 +0530
    Ready:          False
vikki@drona-child-1:~$ kubectl describe pod stress-test-dbbcf4fd7-pqh99 | grep -i restart
    Restart Count:  26
  Warning  BackOff  2m (x480 over 2h)  kubelet, drona-child-3  Back-off restarting failed container
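The Exit Code of 137 is 128 + 9: the container's process was killed with SIGKILL (signal 9) by the kernel's OOM killer. You can see where that number comes from with a quick local demonstration:

```shell
# A process killed with SIGKILL exits with status 128 + 9 = 137
sh -c 'kill -9 $$'
echo "exit code: $?"
```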
I do not have a Horizontal Pod Autoscaler configured, and the cluster is running with only one worker node, so the pod is simply OOM-killed and restarted repeatedly.