Kubernetes Pod Overhead – Way to separate resource consumption


Hello, readers! This article talks about Kubernetes Pod Overhead with examples.

So, let us begin to understand it. 🙂


Introduction – Pod Overhead in Kubernetes

As we have discussed earlier, Kubernetes provides an entity known as a Pod, which runs containers within it. We can imagine a Pod as the smallest running instance of the application.

To run an application smoothly within the container, we need to provide the required resources to the container in terms of CPU and memory.

The resource allocation trickles down as follows-

At first, we need a node (a virtual machine) with the necessary CPU and memory to host the applications in the backend (in the form of pods). Next, a calculated amount of resources needs to be provided to the pod, which is responsible for holding the containers. Finally, we assign resources to the containers so that they can run the application processes.
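As a quick sketch of the container-level piece of this chain, resource requests and limits are set per container in the pod spec (the pod and container names below are illustrative, not from the article's later example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx:alpine
    resources:
      requests:              # what the scheduler reserves for the container
        cpu: 100m
        memory: 64Mi
      limits:                # the hard cap enforced by the kubelet
        cpu: 250m
        memory: 128Mi
```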

From the above explanation, one thing that comes to mind is that the pods would also require resources to run themselves within the cluster.

This is when the concept of Pod Overhead comes into the picture.

Pod Overhead is a feature that accounts for the resources a pod consumes to run itself, on top of the container CPU and memory values. With this feature, we can track and budget for the resources that the pod uses to support its own runtime.

When we enable Pod Overhead for any pod within the cluster, the pod's effective resource footprint becomes the sum of the resources requested by the containers that run within the pod and the overhead the pod needs to run itself.


Defining Pod Overhead for an application

To introduce Pod Overhead for your application pod, we need to define a RuntimeClass with the overhead field.

The overhead defined in a RuntimeClass applies to every pod that references that class, as shown below-

kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: demo-rc
handler: demo-rc
overhead:
  podFixed:
    memory: "500Mi"
    cpu: "300m"

So, every pod using this RuntimeClass is charged an extra 500Mi of memory per pod (for example, for the virtual machine and guest OS of a virtualized runtime) and 300m of CPU to run the pod's own supporting processes.

In order for a pod to pick up the above overhead, we need to reference the RuntimeClass through the runtimeClassName field within the deployment/pod YAML –

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  runtimeClassName: demo-rc
  containers:
  - name: nginx
    image: nginx:alpine
    resources:
      limits:
        cpu: 250m
        memory: 100Mi

The moment we reference the RuntimeClass in the deployment and apply it, the Pod Overhead is accounted for in addition to the resources requested by the containers.

The overhead from the RuntimeClass gets injected into the Pod specification, which we can verify as follows-

kubectl get pod demo-pod -o jsonpath='{.spec.overhead}'

Output-

map[cpu:300m memory:500Mi]

Pod Overhead interpretation by kubelet

Consider that we have the Pod Overhead feature enabled for an application pod. In this scenario, when we spin up the pod, the kube-scheduler takes the Pod Overhead into account while deciding which node the pod should land on.

That is, it adds the pod's overhead to the resource requests of the containers within it.

Once the Pod gets scheduled on a node, the kubelet running on that node automatically creates a new cgroup for that particular Pod and sets an upper resource limit on it. This upper limit is the sum of the container limits and the Pod Overhead values. For our example, that works out to 250m + 300m = 550m of CPU and 100Mi + 500Mi = 600Mi of memory.
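One way to observe this accounting (assuming the pod from our example is running, and substituting your actual node name for the placeholder) is to check the node-level resource bookkeeping:

kubectl get pod demo-pod -o jsonpath='{.spec.nodeName}'

kubectl describe node <node-name>

In the "Non-terminated Pods" section of the describe output, demo-pod should be listed with roughly 550m CPU and 600Mi memory requested, i.e. the container values plus the overhead from the RuntimeClass.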


Conclusion

With this, we have come to the end of this topic. Feel free to comment below in case you come across any questions.

For more such posts related to Kubernetes, stay tuned with us.

Till then, Happy Learning!! 🙂
