Deploy ElasticSearch in Kubernetes: Practical Implementation

Hello, readers! This article explains how to deploy ElasticSearch in Kubernetes, with a practical walkthrough.

So, let us begin!! 🙂


Purpose of the ELK Stack in Cloud-native Logging

When the era of virtual machines came into the picture, folks started moving applications to them for better scalability and reliability.

Usually, when an app was deployed on a single server, its logs accumulated at a fixed path such as /var/lib/programs/error.log. This was pretty okay with a limited number of servers, but as the number of VMs grew (horizontal scaling), maintaining logs became difficult as well.

Imagine having 4-5 servers in the same cluster hosting various applications; keeping track of their logs was a task of real concern back then.

Now, let us come back to the present-day picture.

The servers have been replaced by cloud providers and orchestration platforms. Kubernetes lets us host our platform on containers. Various kinds of applications get hosted and generate different kinds of logs. To make sense of all of it, we need a centralized way to manage the logs of the various applications in the backend infrastructure.

This is when ElasticSearch and Logstash come to the rescue.

With the ELK stack, we can have a log aggregation system within our infrastructure.

ElasticSearch is where the data lives. It runs as a cluster of nodes to accommodate the data. With Logstash, we can transform the logs into a suitable format; the transformed logs then move into the database (the ElasticSearch cluster).

Finally, Kibana helps us visualize the data from ElasticSearch through the APIs it offers.

In this article, we will look at how to deploy ElasticSearch into the Kubernetes cluster.


Deploy ElasticSearch in Kubernetes

In order to have the ElasticSearch component in the Kubernetes cluster, we need to create the following Kubernetes resources:

  1. Service Account
  2. ClusterRole and ClusterRoleBinding
  3. StatefulSet (ElasticSearch cluster)

First, we will define the service account that the ElasticSearch cluster will run under.

ServiceAccount.YAML

apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
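
With the manifest saved (say, as serviceaccount.yaml), we can create the account and confirm it exists:

kubectl apply -f serviceaccount.yaml
kubectl get serviceaccount elasticsearch-logging -n kube-system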

Post creation of the service account, we bind it to a ClusterRole through a ClusterRoleBinding so that it gets the required permissions. The service account needs read access to services, namespaces, and endpoints.

ClusterRole.YAML & ClusterRolebinding.YAML

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  # roleRef requires the rbac API group to be set explicitly
  apiGroup: rbac.authorization.k8s.io

Having all the prerequisites cleared, let us now deploy the ElasticSearch cluster into the Kubernetes infrastructure.

StatefulSet.YAML

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
spec:
  # serviceName must point to a headless Service that governs this StatefulSet
  serviceName: elasticsearch-logging
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: elasticsearch:6.8.4
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          # the official elasticsearch image keeps its data here
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: elasticsearch-logging
        emptyDir: {}
      initContainers:
      # ElasticSearch requires the host to have vm.max_map_count >= 262144;
      # this privileged init container raises it before ElasticSearch starts.
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true

In the above example, we have made use of emptyDir as the volume, which means the data does not survive pod deletion. In a real scenario, we would want to use a persistent volume instead.
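
As a minimal sketch, the emptyDir volume could be swapped for a volumeClaimTemplates section in the StatefulSet spec, so that each replica gets its own PersistentVolumeClaim (this assumes a default StorageClass is available in the cluster):

  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

With this in place, the volumes entry (with its emptyDir) is removed from the pod template, while the volumeMounts section stays as it is.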

Now, in order to access the ElasticSearch database, we need to create a Service in the Kubernetes infrastructure. We will be accessing the database on port 9200.

ElasticSearch_Service.YAML

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging-svc
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    # must match the pod labels set in the StatefulSet template
    k8s-app: elasticsearch-logging
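
After applying the manifest (say, elasticsearch-service.yaml), we can confirm that the Service has actually picked up the ElasticSearch pods by checking its endpoints:

kubectl apply -f elasticsearch-service.yaml
kubectl get endpoints elasticsearch-logging-svc -n kube-system

If the ENDPOINTS column comes up empty, the selector does not match the pod labels.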

Post creation of the Service, as the last step, we port-forward the Service so that the database can be reached on localhost.

Port forwarding:

kubectl port-forward -n kube-system svc/elasticsearch-logging-svc 9200:9200
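
With the port-forward running, we can verify the deployment from another terminal using the ElasticSearch REST API:

curl http://localhost:9200
curl "http://localhost:9200/_cluster/health?pretty"

The first call returns the node name and version banner; the second reports the cluster health status.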

Conclusion

With this, we have reached the end of this topic. Feel free to comment below in case you come across any questions.

For more such posts related to Kubernetes and its ecosystem, stay tuned with us.

Till then, Happy Learning!! 🙂
