Kubernetes Liveness Probe – All you need to know!


Hello, readers! In this article, we will be focusing on the Kubernetes liveness probe in detail.

So, let us get started!


What is a liveness probe in Kubernetes?

Kubernetes provides us with Pods and containers to host our applications in an isolated manner. In this setup, it is essential to have a health check for the containers that run our application instances.

Once an application instance starts running within a container, it may encounter situations where the container is running but the application has reached a deadlock. That is, the application is up but fails to make progress. For such cases, we need some kind of probe to detect the situation and resolve the deadlock.

This is where the liveness probe comes into the picture.

The liveness probe enables us to detect the health of the container and deadlocks within the running application, and then restart the container that holds the application. This is very helpful in scenarios where the application keeps running in a semi-broken state for a long period of time. A restart fixes the situation and brings the application back to a fully functional state.


Configuring a liveness probe on a Kubernetes Pod

It is now time to configure a liveness probe for our container so that we have a health check for our application once it is running at full capacity.

In the below example, we create a pod definition file with the following specifications:

  • pod name: nginx-pod
  • container name: nginx-liveness-detect
  • The nginx image is used as the container image.

In this configuration, the initialDelaySeconds parameter tells the kubelet to wait 5 seconds before running the first cycle of the liveness probe, and periodSeconds tells it to repeat the probe every 10 seconds.

While performing the probe, the kubelet executes the command cat /tmp/live_probe inside the container. If the command exits with status zero, the container is considered healthy and running. If it exits with a non-zero status, the kubelet kills the container and then restarts it.

Once the container starts running, it executes the below command:

/bin/bash -c "touch /tmp/live_probe; sleep 30; rm -rf /tmp/live_probe; sleep 100"

According to the above command, the container appears healthy for the first 30 seconds of its lifecycle: during that window the file /tmp/live_probe exists, so the command cat /tmp/live_probe returns success (exit code 0).

After 30 seconds, the file is removed, so the command returns a non-zero exit code, indicating that the container is unhealthy.

pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-prob-exec
  name: nginx-pod
spec:
  containers:
  - name: nginx-liveness-detect
    image: nginx
    args:
    - /bin/bash
    - -c
    - touch /tmp/live_probe; sleep 30; rm -rf /tmp/live_probe; sleep 100
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/live_probe
      initialDelaySeconds: 5
      periodSeconds: 10

kubectl apply -f pod.yaml
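Besides initialDelaySeconds and periodSeconds, a probe supports a few more tuning fields. The fragment below is a hedged sketch showing them with their default values (the field names come from the Kubernetes Probe API; only the livenessProbe section is shown):

```yaml
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/live_probe
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 1    # seconds after which a single probe attempt times out
      failureThreshold: 3  # consecutive failures before the container is restarted
      successThreshold: 1  # must be 1 for liveness probes
```

Raising failureThreshold is the usual way to tolerate an occasional slow response without triggering an unnecessary restart.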

Events during the first 30 seconds of the container lifecycle:

The below events (captured during the first 30 seconds) indicate that the container is healthy, i.e., the liveness probe has not failed yet:

kubectl describe pod nginx-pod

FirstSeen    LastSeen    Count   From            SubobjectPath           Type        Reason      Message
--------- --------    -----   ----            -------------           --------    ------      -------
14s       14s     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned nginx-pod to node1
13s       13s     1   {kubelet node1}   spec.containers{liveness}   Normal      Pulling     pulling image "nginx"
13s       13s     1   {kubelet node1}   spec.containers{liveness}   Normal      Pulled      Successfully pulled image "nginx"
13s       13s     1   {kubelet node1}   spec.containers{liveness}   Normal      Created     Created container with docker id 76849c02312e
13s       13s     1   {kubelet node1}   spec.containers{liveness}   Normal      Started     Started container with docker id 76849c02312e

Events after 40 seconds:

As the file is no longer present (post 30 seconds), the liveness probe fails; the kubelet considers the container unhealthy and restarts it.

FirstSeen    LastSeen    Count   From            SubobjectPath           Type        Reason      Message
--------- --------    -----   ----            -------------           --------    ------      -------
39s       40s     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned nginx-pod to node1
37s       37s     1   {kubelet node1}   spec.containers{liveness}   Normal      Pulling     pulling image "nginx"
37s       37s     1   {kubelet node1}   spec.containers{liveness}   Normal      Pulled      Successfully pulled image "nginx"
37s       37s     1   {kubelet node1}   spec.containers{liveness}   Normal      Created     Created container with docker id 76849c02312e
39s       36s     1   {kubelet node1}   spec.containers{liveness}   Normal      Started     Started container with docker id 76849c02312e
4s        4s      1   {kubelet node1}   spec.containers{liveness}   Warning     Unhealthy   Liveness probe failed: cat: can't open '/tmp/live_probe': No such file or directory

The moment it detects the unhealthy state of the container, the kubelet restarts it:

NAME        READY     STATUS    RESTARTS   AGE
nginx-pod   1/1       Running   1          2m

As seen above, the container has been restarted once (RESTARTS = 1) due to the failed liveness probe checks.
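For illustration, the decision the kubelet makes for an exec probe boils down to checking the command's exit code. The following is a minimal Python sketch of that rule, run locally rather than inside a container (exec_probe is a hypothetical helper for this article, not part of any Kubernetes client library):

```python
import subprocess

def exec_probe(command):
    """Healthy iff the probe command exits with status 0."""
    # The kubelet runs the configured command inside the container;
    # here we simply run it locally to illustrate the decision rule.
    result = subprocess.run(command, capture_output=True)
    return result.returncode == 0
```

With this helper, exec_probe(["cat", "/tmp/live_probe"]) would return True only while the file exists, matching the 30-second timeline described above.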


1. TCP Liveness Probe

In the case of the TCP liveness probe, we make use of a TCP socket. That is, the kubelet attempts to open a TCP socket connection to the container on the specified port.

Example:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-prob-exec
  name: nginx-pod
spec:
  containers:
  - name: nginx-liveness-detect
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20

In the above example, the kubelet attempts to connect to the container on port 80 (the port the nginx image listens on by default) every 20 seconds, starting 15 seconds after the container starts. If the connection fails, it restarts the container.
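The check itself is simple to picture: a connection attempt that succeeds means healthy. Here is a minimal Python sketch of the tcpSocket semantics (tcp_probe is a hypothetical helper for illustration, not a Kubernetes API):

```python
import socket

def tcp_probe(host, port, timeout=1.0):
    """Healthy iff a TCP connection to host:port can be established."""
    try:
        # The kubelet only needs the connection to open; it closes it
        # again immediately without sending any data.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```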


2. HTTP Liveness Probe

Apart from the exec-based liveness probe, we can also configure an HTTP GET request as the liveness probe mechanism.

Example:

As an extension to the above pod.yaml file, we set the liveness probe to an HTTP GET request, wherein the kubelet sends an HTTP GET request to the server running inside the container and listening on port 8080.

If the server's handler for /health/check returns a success code (greater than or equal to 200 and less than 400), the container is considered alive and healthy. If the handler returns a failure code, the kubelet restarts the container.

Note that, unlike the earlier examples, this one assumes a container image that ships a /server binary serving /health/check on port 8080; the stock nginx image provides neither, so a custom image would be needed to run it as-is.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-prob-exec
  name: nginx-pod
spec:
  containers:
  - name: nginx-liveness-detect
    image: nginx
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /health/check
        port: 8080
        httpHeaders:
        - name: Trailer-Header
          value: Healthy
      initialDelaySeconds: 5
      periodSeconds: 10
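The status-code rule described above (healthy for codes 200 through 399) can be sketched in Python as follows (http_probe is a hypothetical helper for illustration; note that urlopen follows redirects automatically, so the code checked is that of the final response):

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def http_probe(url, headers=None, timeout=1.0):
    """Healthy iff the HTTP status code is >= 200 and < 400."""
    request = Request(url, headers=headers or {})
    try:
        with urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 400
    except HTTPError as err:
        # urlopen raises for 4xx/5xx responses; apply the same rule.
        return 200 <= err.code < 400
    except URLError:
        # Connection-level failure: treat as unhealthy.
        return False
```

For example, http_probe("http://10.0.0.5:8080/health/check") would mirror the probe configured above against a hypothetical pod IP.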

Conclusion

By this, we have reached the end of this topic. Feel free to comment below in case you come across any questions.

For more such posts related to Kubernetes, stay tuned with us.

Till then, Happy Learning!! 🙂
