Debugging Kubernetes Application-Level Issues


Hello, readers! This article walks through debugging Kubernetes application-level issues, considering various debugging scenarios.

So, let us begin!! 🙂


Actions After the Go-Live of an Application in Kubernetes

Kubernetes provides a vast platform for hosting applications in the form of containers and offers efficient ways to manage those containers at scale.

Now, once the application is packaged and running within a container, it is necessary to maintain the state of the containers, that is, to ensure that the application is always up and running.

To do so, we need to make sure that all the relevant resources that contribute to the functioning of the application are healthy.
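
For instance, a simple way to keep an eye on the pods of an application is to watch them continuously (the namespace name below is just a placeholder)-

kubectl get pods -n namespace-name -w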

This article focuses on debugging issues with application-level resources, for example, Pods, Services, worker nodes, Secrets, etc.

As we read in our earlier Kubernetes Pod article, a Pod is the smallest live instance of our application, which makes it essential to observe the pods and keep them healthy.

Once our application is running, issues around it are inevitable, so it is essential for us to know how to debug them.


Describing the events of a Pod

The first and most basic step in debugging any application is to have a look at the current description of its pod.

Example-

kubectl describe pod pod-name -n namespace-name

The description gives us a lot of information about the pod, such as –

  • Number and type of containers running within the pod
  • Labels associated with the pod
  • Status of each container
  • Readiness and liveness probes of the containers
  • Restart count of each container
  • Events related to the lifecycle of the pod, etc.

The state of the containers can be one of the following-

  1. Waiting
  2. Running
  3. Terminated

This state can help us dig deeper whenever an issue occurs.

The Ready state of a container confirms that it has passed its last readiness probe. The restart count tells us the number of times the container has been restarted.
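
As an illustration, a trimmed excerpt of the container section of the describe output may look like the below (the container name and values here are placeholders, and the exact fields vary by Kubernetes version)-

Containers:
  app-container:
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
    Ready:          False
    Restart Count:  4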


Checking for the logs generated by a Pod

Apart from the description, we can also have a look at the logs of the pod to learn more about the events recorded as part of application-level transactions.

Example-

kubectl logs -f pod-name -n namespace-name
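
A couple of useful variations of this command: for a pod with multiple containers, we can select one with the -c flag, and for a container that has crashed and restarted, we can fetch the logs of its previous run with --previous-

kubectl logs pod-name -c container-name -n namespace-name
kubectl logs pod-name -n namespace-name --previous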

Now, let us have a look at some of the more common debugging scenarios in the sections below.


1. Debugging Scenario 1 – Pending Pods

One of the most common scenarios we can run into after provisioning a Pod is that the Pod goes into a Pending state.

This can happen because of one of the following reasons-

  1. The Pod requests resources (such as CPU or memory) that are not available on any node.
  2. The Pod specifies a node selector that does not match the labels on any of the nodes.
  3. The Pod does not define tolerations for the taints present on the nodes.
  4. The resource quota at the namespace level is exhausted.

In any of the above scenarios, the pod remains in the Pending state until the underlying issue is resolved.
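
For illustration, such a pod would show up as below in the pod listing (the pod name and age are placeholders)-

NAME         READY   STATUS    RESTARTS   AGE
my-app-pod   0/1     Pending   0          5m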

In such a situation, we can describe the pod to get the details of its events. The pod would typically show a FailedScheduling event whose message points to the cause, such as-

  1. Insufficient cpu or Insufficient memory
  2. node(s) didn't match node selector
  3. node(s) had taints that the pod didn't tolerate
  4. exceeded quota (this one is reported at admission time rather than by the scheduler)

At this point, debugging becomes fairly straightforward: depending on the event, we check the node labels, the available resources, or the taints and tolerations.
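
For example, the full event message often reads something like "0/4 nodes are available: 3 Insufficient cpu, 1 node(s) had taints that the pod didn't tolerate." (the exact wording varies by Kubernetes version). From there, a few commands help verify the suspected cause; the node and namespace names below are placeholders-

kubectl get nodes --show-labels
kubectl describe node node-name | grep -i taints
kubectl describe resourcequota -n namespace-name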


2. Debugging Scenario 2 – Unreachable node

At times, the node on which the pod is trying to schedule itself becomes unreachable. To check for this, we can execute the below command-

kubectl get nodes

Output-

NAME                STATUS     ROLES    AGE   VERSION
kubernetes-node-1   NotReady   <none>   1h    v1.13.0
kubernetes-node-3   Ready      <none>   1h    v1.13.0
kubernetes-node-5   Ready      <none>   1h    v1.13.0
kubernetes-node-2   Ready      <none>   1h    v1.13.0

The NotReady status indicates that the node is not available for pods to be scheduled on.

We can also describe the node to learn more about the error/issue-

kubectl describe node node-name
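
Within the node description, the Conditions section is usually the most telling part; the Ready condition showing a status of Unknown typically means the kubelet has stopped posting its health. Assuming a Linux shell with grep available, we can filter that section out quickly-

kubectl describe node node-name | grep -A 8 "Conditions:"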

We can then log on to the node and take the necessary action; often, the final step is to restart the kubelet service as follows-

systemctl restart kubelet
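
If the node still reports NotReady after the restart, the kubelet's status and recent logs on the node usually point to the root cause-

systemctl status kubelet
journalctl -u kubelet --since "1 hour ago" --no-pager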

Conclusion

With this, we have reached the end of this topic. Feel free to comment below in case you come across any questions.

For more such posts related to Docker and Kubernetes, stay tuned with us.

Till then, Happy Learning!! 🙂
