Hello, readers! This article walks through a dynamic persistent storage solution in Google Kubernetes Engine, with a hands-on demonstration.
So, let us begin! 🙂
Storage in Kubernetes
Kubernetes offers us Pods to host our applications as containers over the cloud. With applications comes the storage requirement for the data they generate.
For this, Kubernetes offers us two types of storage options-
- Static Persistent Volume Storage: In this solution, the application teams pre-create a persistent volume and claim storage through a persistent volume claim. The caveat of this method is that the amount of storage must be decided and claimed beforehand.
- Dynamic Persistent Volume Storage: With dynamic provisioning of storage, the overhead of defining the storage amount beforehand is eliminated. Thus, the application teams can request a particular amount of storage as and when needed.
Kubernetes volumes support the below access modes (a sample claim using one of them follows the list)-
- ReadWriteOnce (RWO): The volume can be mounted read-write by a single node only, so it cannot be shared by pods running on different nodes.
- ReadOnlyMany (ROX): The volume can be mounted read-only by many nodes, giving multiple pods read-only access.
- ReadWriteMany (RWX): The volume can be mounted read-write by many nodes, so multiple pods can read and write simultaneously.
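As a quick illustration, the access mode is declared in the claim's spec. Below is a minimal sketch of a persistent volume claim; the name and size are placeholders, and the claim relies on the cluster's default storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce            # one of the modes listed above
  resources:
    requests:
      storage: 10Gi            # amount requested at claim time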
In this article, we will understand how to set up a dynamic storage provisioner solution in Kubernetes.
NFS Ganesha Storage – Dynamic Storage Provisioner
An NFS Ganesha dynamic deployment enables us to use the ReadWriteMany access mode for our volumes. With NFS Ganesha, the backend storage/volume can be shared by multiple applications.
At first, we provision a persistent disk as the backing volume for all the underlying applications. Moving ahead, we create a persistent volume and a persistent volume claim within the Kubernetes cluster to consume and reserve that storage.
After that, we introduce the NFS Ganesha setup on top of this infrastructure so that the storage can be sliced up for the app teams.
Let us now understand this setup in a step-by-step manner.
Step 1: Creation of Persistent Disk
At first, we create a Persistent Disk (zonal or regional) as the backend storage for our NFS Ganesha solution. The below command creates a regional disk, which must also name two replica zones within the region (the zones here are examples; Compute Engine sizes disks in binary gigabytes, so 2000GB matches the 2000Gi capacity used later)-
gcloud compute disks create storage-nfsdisk --size=2000GB --region=europe-west1 --replica-zones=europe-west1-b,europe-west1-c
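To confirm the disk was created before moving on, we can describe it (this assumes the gcloud project is already configured):

gcloud compute disks describe storage-nfsdisk --region europe-west1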
Step 2: Storage class for the Persistent Disk
Having created a disk of a particular size, we now create a storage class for the Compute Engine Persistent Disk CSI driver, which the persistent volume and claim in the next steps will refer to.
Have a look at the below code-
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: pd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: Immediate
reclaimPolicy: Retain
The above YAML creates a storage class for the Persistent Disk; the replication-type: regional-pd parameter matches the regional disk created in Step 1.
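To register the class, we apply the manifest and list it; the file name here is just an assumption for the demo:

kubectl apply -f pd-storageclass.yaml
kubectl get storageclass pd-storageclass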
Step 3: Creation of a Persistent Volume in the cluster
We now create a Persistent Volume, based upon the above storage class, to reserve the disk space for the applications.
Have a look at the below code!
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo-nfs
spec:
  storageClassName: "pd-storageclass"
  capacity:
    storage: 2000Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: pvc-demo-nfs
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/demo-project/regions/europe-west1/disks/storage-nfsdisk
With this, the disk space is represented inside the cluster through the storage class.
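After applying the manifest (file name assumed), the volume should show up as Available until the claim in Step 4 binds it:

kubectl apply -f pv-demo-nfs.yaml
kubectl get pv pv-demo-nfs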
Step 4: Provision a Persistent Volume claim for the PV
Having created a persistent volume, we now provision a persistent volume claim to claim the 2000Gi of storage for the underlying NFS configuration that will be set up in the next steps.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-nfs
spec:
  storageClassName: "pd-storageclass"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
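Since the persistent volume's claimRef already points at this claim, it should bind immediately once applied; the STATUS column should read Bound:

kubectl get pvc pvc-demo-nfs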
Step 5: Set up NFS Ganesha resources and Deployment
We will now set up NFS Ganesha on top of this infrastructure. The following resources will be deployed as part of the NFS setup-
- Service Account
- Service
- Deployment
Have a look at the below code!
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner-sa
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: nfs-udp
      port: 2049
      protocol: UDP
    - name: nlockmgr
      port: 32803
    - name: nlockmgr-udp
      port: 32803
      protocol: UDP
    - name: mountd
      port: 20048
    - name: mountd-udp
      port: 20048
      protocol: UDP
    - name: rquotad
      port: 875
    - name: rquotad-udp
      port: 875
      protocol: UDP
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
    - name: statd
      port: 662
    - name: statd-udp
      port: 662
      protocol: UDP
  selector:
    app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner-sa
      containers:
        - name: nfs-provisioner
          image: gcr.io/k8s-staging-sig-storage/nfs-provisioner:v3.0.0
          ports:
            - name: nfs
              containerPort: 2049
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            # Must match the provisioner name of the storage class in Step 7
            - "-provisioner=demo.com/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: app-vol
              mountPath: /export   # the provisioner serves its exports from /export
      volumes:
        - name: app-vol
          persistentVolumeClaim:
            claimName: pvc-demo-nfs
- At first, it creates a service account that the NFS Ganesha provisioner will use as its identity within the cluster.
- Then, it exposes the NFS Ganesha deployment through a Kubernetes service over the various required ports, such as 2049 for NFS.
- At last, it creates the NFS deployment, which refers to the above-created persistent volume claim (pvc-demo-nfs) for storage. A quick way to verify the rollout is shown right after this list.
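With the manifests applied, we can confirm the rollout; the provisioner may log permission errors until the RBAC objects from Step 6 are in place:

kubectl get pods -l app=nfs-provisioner
kubectl get svc nfs-provisioner
kubectl logs deploy/nfs-provisioner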
Step 6: Give appropriate access to the NFS Dynamic Provisioner
We now grant the necessary cluster role and namespace role to the NFS service account, as mentioned below (a quick verification follows the bindings)-
- Cluster role: permissions on persistentvolumes, persistentvolumeclaims, storageclasses, events, services, endpoints, and the nfs-provisioner pod security policy.
- Role: read and write permissions on endpoints within the default namespace.
Have a look at the below code!
Cluster role and cluster role binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-adm
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-adm-bind
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-adm
  apiGroup: rbac.authorization.k8s.io
Role and role binding:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-rb
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-sa
    namespace: default
roleRef:
  kind: Role
  name: nfs-provisioner-role
  apiGroup: rbac.authorization.k8s.io
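A quick sanity check for these grants is to impersonate the service account; both commands should answer yes:

kubectl auth can-i create persistentvolumes --as=system:serviceaccount:default:nfs-provisioner-sa
kubectl auth can-i update endpoints -n default --as=system:serviceaccount:default:nfs-provisioner-sa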
Step 7: Creation of Storage Class with ReadWriteMany mode
As the final step of the setup, we create a storage class on top of the NFS provisioner. Volumes provisioned through this class support the ReadWriteMany access mode, which allows app teams to read and write a volume from multiple nodes within the cluster.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: demo-nfs
provisioner: demo.com/nfs
reclaimPolicy: Retain
mountOptions:
  - vers=4.1
Now any application can create a persistent volume claim directly against the above storage class to provision a volume at runtime.
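For example, an application team could request a slice of the shared NFS storage with a claim like the one below; the claim name and size are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-shared-data        # hypothetical application claim
spec:
  storageClassName: "demo-nfs" # the NFS-backed class from Step 7
  accessModes:
    - ReadWriteMany            # shared read-write across nodes
  resources:
    requests:
      storage: 50Gi            # carved dynamically out of the NFS backing volume

Because the claim uses ReadWriteMany, any number of pods, across nodes, can mount it at the same time.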
With this, we have reached the end of this topic. Feel free to comment below in case you come across any questions. For more such posts related to Kubernetes and Docker, stay tuned with us.
Till then, Happy Learning!! 🙂