How to Connect Kubernetes Containers? – An Easy Introduction


Hello, readers! This article talks about connecting Kubernetes containers, with examples.

So, let us begin! 🙂


Connect Kubernetes Containers – Overview

Before understanding the connection through Kubernetes Services, it is essential for us to understand how containers/pods communicate with each other.

By default, Docker allows containers to talk to other containers only if they reside on the same node/machine. This is termed host-private networking. So, for containers to communicate with containers on other nodes, we need to allocate and open ports on every node's IP address, which then forward the communication requests to the containers.
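To make this concrete, here is a small sketch of the host-private model (the host port 8080 and the <node-ip> placeholder below are illustrative):

# Publish host port 8080 and forward it to port 80 inside the container
docker run -d --name web -p 8080:80 nginx:alpine

# From another machine, traffic has to target the node's IP and the published port
curl http://<node-ip>:8080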

By default, Kubernetes allows pod-to-pod communication across all nodes. Every pod gets assigned an IP address that is ephemeral in nature, i.e. it dies with the pod. As the IP address gets assigned the moment the pod is created, we don't need to explicitly define ports, and all pods have connectivity within the cluster.


Issue with the connection through Pods

In the below example, we have created an Nginx Deployment with a containerPort specification of port 80.

Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:alpine
        ports:
        - containerPort: 80
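
Assuming the above manifest is saved as nginx-demo.yaml (the filename is just an example), it can be applied as follows:

kubectl apply -f nginx-demo.yaml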


In this scenario, our pod becomes accessible from any node within the cluster.

kubectl get pods -l run=nginx -o wide

Output-

NAME                        READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-887628404a2           1/1       Running   0          29s       10.244.3.7    lwr01

Now, let us have a look at the Pod IPs:

kubectl get pods -l run=nginx -o yaml | grep podIP

Pod IP-

 podIP: 10.244.3.7

We can now SSH into any of the above nodes and curl the pod IP directly. In the current scenario, the container within the pod is not using any port (such as 80) on the worker node, nor is the traffic being routed through any service to the pod. This means we can run multiple such pods on the same worker node with the same container port and still reach each of them from any other pod or node within the cluster.
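
For instance, from a shell on any of the cluster nodes, a curl against the pod IP and containerPort shown above should return the default Nginx welcome page:

curl http://10.244.3.7:80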

The real issue with connecting through Pods…

But, as mentioned, this whole setup is ephemeral in nature. That is, if the node dies, the pod goes away and so does its IP. When we redeploy our workload, a new pod is created with a brand new IP.

To solve this issue, Kubernetes Services come into the picture!


Kubernetes Services to the rescue!

Kubernetes Services solve the issue of Pod IPs being ephemeral. The moment we create a Kubernetes Service within the cluster, it gets an IP assigned to it, known as the ClusterIP.

A Service keeps this IP bound to it for its entire lifespan. That is, the IP stays as long as the Service is a part of the cluster. We can now attach pods to a Service using selectors and labels. This way, a pod can talk to the Service, and the Service internally forwards the request to one of the pods matching its selector.

In the below example, we have defined a Service that targets port 80 on any pod carrying the label run=nginx.

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    run: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginx

kubectl apply -f service.yaml

Remember, the targetPort is the port on which the container accepts traffic, while the port is the one other pods use to access the Service. When targetPort is omitted, as it is above, it defaults to the value of port.
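
For illustration, if the two were meant to differ (the 8080 value below is purely an example and not part of the manifest above), the ports section would look like this:

  ports:
  - port: 8080        # other pods reach the Service at nginx-svc:8080
    targetPort: 80    # traffic is forwarded to port 80 inside the container
    protocol: TCP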

kubectl describe svc nginx-svc

Output:

Name:                nginx-svc
Namespace:           default
Labels:              run=nginx
Annotations:         <none>
Selector:            run=nginx
Type:                ClusterIP
IP:                  10.0.163.151
Port:                <unset> 80/TCP
Endpoints:           10.244.3.7:80
Session Affinity:    None
Events:              <none>

Usually, a Service is backed by a group of pods. These pods are exposed through the Service's endpoints.

kubectl get ep nginx-svc

Endpoints-

10.244.3.7:80

The moment a pod dies, it is removed from the endpoints, and a newly created pod that matches the Service's labels is added in its place.

Even as pods die and are replaced, the Service IP (ClusterIP) remains the same; only the endpoint list is updated.
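
As a quick sketch of this behaviour (the test-client pod name below is arbitrary), we can delete the backing pod, let the Deployment recreate it, and confirm that the Service still answers on its stable name and ClusterIP:

# Delete the running pod; the Deployment replaces it with a new one (and a new pod IP)
kubectl delete pod -l run=nginx

# The endpoint list now shows the new pod IP, while the ClusterIP stays the same
kubectl get ep nginx-svc
kubectl get svc nginx-svc

# Other pods keep reaching Nginx through the stable Service name (cluster DNS)
kubectl run test-client --rm -it --restart=Never --image=busybox -- wget -qO- http://nginx-svc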


Conclusion

With this, we have reached the end of this topic. Feel free to comment below in case you have any questions. For more concepts related to Kubernetes, stay tuned with us.

Till then, Happy Learning!! 🙂
