Introduction to Google Kubernetes Engine


Hello, readers. This article introduces Google Kubernetes Engine (GKE) and covers its various aspects.

So, let us begin!! 🙂

Also read: Run Apache Cassandra on Kubernetes with Statefulset


Need for Kubernetes Infrastructure

Not long ago, large, heavyweight applications ran directly on servers in data centers. Then came the concept of virtual machines to host the application and its related configuration files. This method could not withstand the increasing demands for application agility.

That is, as applications scaled up their workloads, the virtual machines became costly in terms of resources and incurred extra charges on top of that to maintain them.

This is when Containers came into the picture.

With containers came flexibility in terms of cost optimization. Applications can now be hosted in containers and are charged only for the resources they actually use rather than for a pre-allocated block.

Containers also make applications lightweight and increase scalability across the architecture. They are considered a fairly easy way to deploy applications to the cloud with minimal maintenance, and they are compatible with the operating systems we choose for our applications.

There are various managed Kubernetes providers, such as Google, Amazon, and Azure.

This blog focuses specifically on the Kubernetes service provided by Google: Google Kubernetes Engine.


Why Google Kubernetes Engine?

When we plan to shift our workloads or applications to the cloud, it is essential to choose the right platform by analyzing the compatibility and needs of our application.

With Google Kubernetes Engine comes the flexibility to deploy workloads and interconnect them with various other cloud resources or on-premises platform resources.

This boosts application productivity, as all the configuration related to an application stays within the container, running on Google's internal infrastructure.

Google Kubernetes Engine also brings a high level of security and reliability to the infrastructure.

Compared with other public cloud vendors, Google Kubernetes Engine is among the most cost-effective and flexible managed Kubernetes offerings.

In addition, Google Kubernetes Engine supports high flexibility with autoscaling enabled for the nodes within the Kubernetes cluster.


Features of Google Kubernetes Engine

  • Synced monitoring and logging of the application: Google Kubernetes Engine offers essential logging and monitoring of containerized applications through Stackdriver (Cloud Logging). We can push application logs to stderr and stdout and then monitor them through Stackdriver.
  • Google Kubernetes Engine provides horizontal autoscaling of both Pods and nodes based on CPU consumption. It also offers vertical autoscaling for the virtual machines, adding resources to them so they can grow with the workload (see the sketch after this list).
  • Google Kubernetes Engine is a fully managed cluster environment. The control plane nodes are taken care of by Google SREs, and we only look after the worker nodes from an application perspective.
  • Auto upgrade and auto repair: Google Kubernetes Engine offers policies for automatic upgrades of the cluster's Kubernetes version as well as auto repair of the nodes running within the cluster.
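
As a rough sketch of the autoscaling and auto-repair features above, the commands below show how a Standard cluster and a workload might be configured. The cluster name, zone, and deployment name are placeholders, not values from this article.

```sh
# Hypothetical example: create a Standard cluster with node autoscaling,
# auto-upgrade, and auto-repair enabled (name and zone are placeholders).
gcloud container clusters create demo-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --enable-autoscaling --min-nodes=1 --max-nodes=5 \
    --enable-autoupgrade \
    --enable-autorepair

# Horizontal Pod autoscaling based on CPU consumption for a deployment
# called "web" (assumed to already exist in the cluster).
kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=10
```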

Modes of Operation in GKE

In Google Kubernetes Engine, every application-specific deployment is a Workload. When we plan to onboard our application onto GKE, we choose to create a cluster in one of the two modes below (a minimal creation sketch follows this list):

  1. Autopilot mode: This mode is largely managed by Google. Google takes care of the management of the nodes as well as the cluster infrastructure. In this mode, the majority of the features are available through the command line, and the choice of operating system narrows down to just two.
  2. Standard mode: In this case, the user has full control over the management of the nodes as well as the Kubernetes cluster components. This gives us the flexibility to customize things at the Kubernetes level easily through Infrastructure-as-Code.
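
As a rough sketch of the two modes, the commands below show how a cluster might be created in each; the cluster names and location are placeholders.

```sh
# Autopilot mode: Google manages the nodes and cluster infrastructure.
gcloud container clusters create-auto autopilot-demo \
    --region=us-central1

# Standard mode: we manage the node pools ourselves.
gcloud container clusters create standard-demo \
    --zone=us-central1-a \
    --num-nodes=3
```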

Runtime Applications of GKE

  • We can run Ingress controllers such as NGINX and Istio on Kubernetes.
  • Through logging, we can easily debug the containers running within the Google Kubernetes infrastructure. It even provides us with both live and historical log data.
  • We can easily provision load balancers within the Kubernetes subnet ranges.
  • At times, depending upon the load, we can easily resize the containers running within the cluster (see the sketch after this list).
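
A minimal sketch of a few of these runtime operations, assuming a deployment named web already exists in the cluster; the names and resource values are placeholders.

```sh
# Expose the deployment through a cloud load balancer.
kubectl expose deployment web --type=LoadBalancer --port=80

# Stream live logs from the containers for debugging.
kubectl logs -f deployment/web

# Resize the containers' resource limits depending on the load.
kubectl set resources deployment web --limits=cpu=500m,memory=512Mi
```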

Storage options in Google Kubernetes Engine

There are various storage solutions available in GKE, based on the application's data requirements:

  1. For persistent data storage, Google Kubernetes Engine lets us connect static and dynamic persistent volume solutions backed by storage classes, standard persistent disks, and scalable virtual machines (see the sketch after this list).
  2. We also have Google Cloud SQL and Cloud Spanner connections available as database storage in the cloud.
  3. Google offers us Google Container Registry to store our Docker images.
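
As an illustrative sketch, the manifest below requests a dynamically provisioned persistent volume through a "standard" storage class (the default on many GKE clusters), and the commands push an image to Container Registry; the claim name, size, project ID, and image name are placeholders.

```sh
# Dynamically provision a persistent disk via the "standard" storage class
# (claim name and size are placeholders).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
EOF

# Push a Docker image to Google Container Registry
# (project ID and image name are placeholders).
docker tag my-app gcr.io/my-project/my-app:v1
docker push gcr.io/my-project/my-app:v1
```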

Conclusion

This marks the end of this topic. Feel free to comment below in case you have any questions.

For more such posts related to Kubernetes, stay tuned with us.

Till then, Happy Learning!! 🙂
