Create Dynamic Persistent Volume Provisioner for Custom/Private Kubernetes Cluster


Overview

Certain application deployments require persistent volumes, which usually need to be created by the Kube admin (which is you) before a pod can claim them. But creating a persistent volume by hand for every volume claim is tedious and time-consuming. This is where dynamic provisioning comes in: users (pods) can claim a persistent volume, and an nfs-provisioner will dynamically create and assign a volume to the pod.

The above feature is readily available on most cloud providers, but if you create and manage your own cluster, you need to deploy an nfs-client-provisioner yourself to get dynamic provisioning, which this tutorial walks you through.
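To make this concrete, once dynamic provisioning is in place a user only submits a PersistentVolumeClaim like the sketch below; the claim name is hypothetical, and the storage class name managed-nfs-storage is an assumption that should match the class you deploy later in this tutorial.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-claim                     # hypothetical claim name
spec:
  storageClassName: managed-nfs-storage   # assumed; must match your deployed storage class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi                        # the provisioner serves this out of the NFS export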


Install the NFS server and Export a directory

You need to have access to an NFS server that exports a directory for the nfs-client to mount. Here I will set up the nfs-server inside my k8s-master node.

Install the NFS server using the commands below -
sudo apt update
sudo apt install nfs-kernel-server
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
sudo systemctl status nfs-server

Create a directory inside the master node as follows
sudo mkdir -p /nfs/kubedata
Change the ownership as below -
sudo chown -R nobody:nogroup /nfs/kubedata

Give read, write, and execute permissions on the directory to all users -
sudo chmod 777 /nfs/kubedata

Permissions for accessing the NFS server are defined in the /etc/exports file. So open the file using your favorite text editor:
sudo vi /etc/exports

You can provide access to a single client, multiple clients, or specify an entire subnet. In this guide, I have allowed the entire world to have access to the NFS share.

/nfs/kubedata *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
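
If you would rather restrict access to your cluster's network, replace the * with a client list or subnet; for example (10.0.0.0/24 is a placeholder for your actual subnet):

/nfs/kubedata 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)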

After granting access to the preferred client systems, export the NFS share directory and restart the NFS kernel server for the changes to take effect -

sudo exportfs -rav
sudo systemctl restart nfs-kernel-server
Check the exports -
sudo exportfs -v

Check if the volume can be mounted from the cluster nodes. From any of the other nodes, run the following commands -

sudo mount -t nfs <IP_OF_NFS_SERVER>:/nfs/kubedata /mnt
mount | grep kubedata

If an error is produced like "Error: bad option... you might need a ... helper program", you need to install the nfs-common package (do it on all nodes) -
sudo apt install -y nfs-common

Now unmount the directory
sudo umount /mnt

The nfs-server is now configured and ready.

Deploy NFS Client Provisioner

Clone the nfs-client-provisioner repository inside the k8s master node as below -
git clone https://github.com/ssmtariq/nfs-client-provisioner.git

Now change into the directory and deploy rbac.yaml as below -
cd nfs-client-provisioner/yamls
kubectl create -f rbac.yaml
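
For reference, rbac.yaml in provisioner setups like this one typically creates a ServiceAccount for the provisioner and a ClusterRole granting it roughly the permissions below; this is a sketch of the usual contents, not necessarily the exact file in the repository.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]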

Then deploy the storage class. I prefer to deploy default-sc.yaml instead of class.yaml because it has the default storage class annotation (see the sketch below).
kubectl create -f default-sc.yaml
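
For orientation, a default storage class for this provisioner looks roughly like the sketch below; the class name and the provisioner string are assumptions and must match what the repository's files actually use. The is-default-class annotation is what makes claims without an explicit storageClassName land here.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage                               # assumed class name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: example.com/nfs                              # assumed; must match PROVISIONER_NAME in deployment.yaml
parameters:
  archiveOnDelete: "false"                                # assumed; controls whether released data is archived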

Before you deploy the nfs-client-provisioner, you need to update the following information inside deployment.yaml (a sketch of the result follows this list):
  1. Set the value of the NFS_PATH environment variable to the directory exported in the nfs-server configuration, which is /nfs/kubedata
  2. Replace <<NFS Server IP>> with the nfs-server node IP
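
As a sketch of how the edited section of deployment.yaml should end up (172.16.0.10 is a placeholder for your NFS server's IP, and the provisioner string is an assumption that must match the storage class):

        env:
          - name: PROVISIONER_NAME
            value: example.com/nfs       # assumed; must match the provisioner field of the storage class
          - name: NFS_SERVER
            value: 172.16.0.10           # placeholder; your nfs-server node IP
          - name: NFS_PATH
            value: /nfs/kubedata         # the directory exported earlier
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.0.10          # same placeholder IP
            path: /nfs/kubedata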
After that, run the following command to deploy the nfs-client-provisioner pod -
kubectl create -f deployment.yaml
Check the status by running the following command -
kubectl get pods


Check the available storage classes by -
kubectl get sc

Now test if the provisioner is working properly by claiming a persistent volume.

Change directory to nfs-client-provisioner and deploy the test-pvc-nfs.yaml sample file as below -
kubectl create -f test-pvc-nfs.yaml

To check the status of the persistent volume and the volume claim, run the following command
kubectl get pv,pvc
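
Both should show a STATUS of Bound once provisioning succeeds. To see the volume in use, you could mount the claim into a pod; a minimal sketch (the pod name is hypothetical, and test-claim is an assumed name for the claim created by test-pvc-nfs.yaml):

kind: Pod
apiVersion: v1
metadata:
  name: test-pod                 # hypothetical pod name
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /mnt        # writes here land on the NFS export
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: test-claim    # assumed claim name from test-pvc-nfs.yaml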

