Deploying a Stateless application through Kubernetes Deployment


Hello, readers! This article talks about Deploying a Stateless application through Kubernetes Deployment with a practical example.

So, let us begin!! 🙂

Also read: Bootstrapping a Kubernetes cluster using Kubeadm

Understanding the process of having an application as a container over the cloud

Docker popularized the concept of running an application as a container in the cloud, as a lighter-weight alternative to full servers and virtual machines. Containers bring portability and consistency along with them. But what happens when the number of containers keeps increasing as we add more applications to the Docker infrastructure?

In this scenario, managing the containers by hand becomes unwieldy. This is where Kubernetes comes to the rescue.

Kubernetes manages containers for us by packing them into pods. A pod is the smallest deployable unit in Kubernetes: an individual running instance of an application.

To provision pods, Kubernetes offers us the concept of Deployments. A Deployment is a workload resource that packages our underlying image together with the application's requirements and creates pods in the cluster.

Now, we will be provisioning such a stateless application through Kubernetes Deployment in the upcoming section.

Provisioning a Stateless application through Kubernetes Deployment

Let us now have a look at deploying a stateless application in a Kubernetes environment.


You would need to have a Kubernetes cluster on your workstation. Make sure you have the kubectl tool installed to connect to the cluster through the command line.

Once all the prerequisites are clear, we can proceed with the deployment of the application.

Deploying a Stateless Application

In this article, we will be making use of the Nginx application for deployment.

Have a look at the example below, saved as deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80

Through the above deployment file, we instruct Kubernetes to create an Nginx application that runs the nginx:1.13 image and exposes container port 80.

Let us now apply the deployment file:

kubectl apply -f deploy.yaml -n namespace-name
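The -n flag targets a specific namespace, which must already exist before the apply succeeds. If it does not, it can be created from a minimal manifest like this sketch (the name namespace-name simply mirrors the command above and is a placeholder):

```yaml
# namespace.yaml - minimal Namespace manifest (placeholder name)
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-name
```

Apply it with kubectl apply -f namespace.yaml, or skip the file entirely and run kubectl create namespace namespace-name.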

Once the deployment is created, it spins up pods, which are the live instances of the application.

kubectl get pods -n namespace-name


          NAME                      READY     STATUS    RESTARTS   AGE
 nginx-deployment-17heduy           1/1       Running   0          12s
 nginx-deployment-18azwds           1/1       Running   0          12s

Let us now try to scale the application!

Scaling the application!

In order to increase the number of live instances of the application, that is, pods, we can scale up the ReplicaSet so that more pods run within the cluster.

The easiest way to do this is the imperative way:

kubectl scale --replicas=3 deploy/nginx -n namespace_name

This scales the application up from 2 pods to 3, so that three live instances of the application are up and running to handle the load.
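The same change can also be made declaratively: bump the replicas field in deploy.yaml and re-apply the file, which keeps the manifest as the single source of truth. A sketch of the relevant fragment:

```yaml
# deploy.yaml (fragment) - only the replicas value changes
spec:
  replicas: 3
```

Re-running kubectl apply -f deploy.yaml -n namespace-name then brings the Deployment up to three pods, and the desired count survives future re-applies of the manifest.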

Delete a Deployment

In scenarios where we want to pull down the current deployment or configuration of the application, we can easily do so through the Kubernetes delete process.

The delete command enables us to remove a Kubernetes Deployment with ease.

kubectl delete deployment nginx

This command deletes all the pods associated with the deployment, and thus the entire application is pulled down from the Kubernetes cluster.


By this, we have reached the end of this topic. Feel free to comment below, in case you come across any questions.

For more such posts related to Docker and Kubernetes, stay tuned with us.

Till then, Happy Learning!!
