After reading this post, you will understand the purpose of the Kubernetes hostPath volume and how to implement it in your environment.
If you have not yet checked the previous parts of this series, please go ahead and check this Link
Kubernetes Hostpath
hostPath is one of the volume types supported by Kubernetes. It mounts a file or directory from the node's filesystem into the Pod, so the container sees a directory that actually exists on the host node.
A hostPath PersistentVolume should only be used in a single-node cluster. Kubernetes does allow hostPath in a multi-node cluster: if you create a Deployment that uses a hostPath volume, the containers will be created and start normally, but the data will not be synchronized between containers running on different nodes.
Advantages
- We can use the node-local filesystem as persistent volume storage.
- It is very useful for testing or development purposes.
- It can be referenced by a PersistentVolumeClaim, or used directly in the Pod YAML file.
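As mentioned above, a hostPath can also be used directly in the Pod spec without a PV/PVC. A minimal sketch (the Pod and volume names here are illustrative, not from the steps below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo          # illustrative name
spec:
  containers:
    - name: demo
      image: httpd
      volumeMounts:
        - mountPath: /var/www/html
          name: host-volume
  volumes:
    - name: host-volume
      hostPath:
        path: /mnt/data        # directory on the node's filesystem
```

The PV/PVC approach used in the rest of this post is preferred when you want to decouple the Pod spec from the storage details.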
HostPath Configuration Steps
- Creating the directory on a particular host
- PV creation (PersistentVolume)
- PVC creation (PersistentVolumeClaim)
- Creating the Pod with the specified PVC
Log in to the worker node and create the directory
controlplane $ ssh node01
Last login: Fri Jul 29 11:10:25 2022 from 10.244.6.167
node01 $ mkdir /mnt/data/
node01 $
PV creation
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Creating the PV
controlplane $ kubectl create -f pv.yaml
persistentvolume/task-pv-volume created
PV list
controlplane $ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Available manual 28s
controlplane $
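Note: instead of creating /mnt/data manually on the node, the hostPath stanza accepts an optional type field. With DirectoryOrCreate, the kubelet creates an empty directory on the node if it does not already exist. A sketch of the same hostPath section with this option:

```yaml
  hostPath:
    path: "/mnt/data"
    # Create an empty directory on the node if it does not exist
    type: DirectoryOrCreate
```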
PVC Creation
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Creating the PVC. Note that the claim requests only 1Gi but will bind to the 10Gi PV: a PVC binds to an available PV whose capacity satisfies the request and whose storageClassName matches.
controlplane $ kubectl create -f pvc.yaml
persistentvolumeclaim/task-pv-claim created
controlplane $ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
task-pv-claim Bound task-pv-volume 10Gi RWO manual 25s
Pod creation
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: httpd
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/var/www/html"
          name: task-pv-storage
controlplane $ kubectl create -f pod.yaml
pod/task-pv-pod created
controlplane $ kubectl get pod
NAME READY STATUS RESTARTS AGE
task-pv-pod 1/1 Running 0 7s
Testing
controlplane $ kubectl exec -it task-pv-pod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@task-pv-pod:/usr/local/apache2# cd /var/www/html/
root@task-pv-pod:/var/www/html#
root@task-pv-pod:/var/www/html# touch index.html
root@task-pv-pod:/var/www/html# cat > index.html
Welcome
^C
root@task-pv-pod:/var/www/html#
Log in to the worker node and verify the data
controlplane $ ssh node01
Last login: Fri Oct 8 17:04:36 2021 from 10.32.0.22
node01 $ cd /mnt/data/
node01 $ cat index.html
Welcome
node01 $
Pod creation on multiple nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test
  name: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: httpd
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/var/www/html"
              name: task-pv-storage
controlplane $ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-9b85b8fbc-q6p9h 1/1 Running 0 14m 192.168.1.5 node01 <none> <none>
test-9b85b8fbc-rvn2c 1/1 Running 0 14m 192.168.1.4 node01 <none> <none>
test-9b85b8fbc-zlbb4 1/1 Running 0 14m 192.168.0.6 controlplane <none> <none>
Log in to the first Pod, which is running on node01, and create a file
controlplane $ kubectl exec -it test-9b85b8fbc-q6p9h bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@test-9b85b8fbc-q6p9h:/usr/local/apache2# cd /var/www/html/
root@test-9b85b8fbc-q6p9h:/var/www/html# ls
root@test-9b85b8fbc-q6p9h:/var/www/html# touch index.html
root@test-9b85b8fbc-q6p9h:/var/www/html# exit
Log in to the container running on the control-plane node and check whether the index.html file created above is present
controlplane $ kubectl exec -it test-9b85b8fbc-zlbb4 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@test-9b85b8fbc-zlbb4:/usr/local/apache2# cd /var/www/html/
root@test-9b85b8fbc-zlbb4:/var/www/html# ls
root@test-9b85b8fbc-zlbb4:/var/www/html#
The index.html file is not available inside the container running on the control-plane node.
Conclusion: if you want the same data in every container of a Deployment, you need shared storage that all nodes can reach.
Example: NFS, Ceph storage, etc.
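As a rough sketch of what such shared storage could look like, here is an NFS-backed PersistentVolume. The server address and export path below are placeholders, not values from this tutorial:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-nfs-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # NFS can be mounted read-write by many nodes
  nfs:
    server: 10.0.0.10      # placeholder: your NFS server address
    path: /exports/data    # placeholder: your exported directory
```

Because every node mounts the same NFS export, all replicas of the Deployment would see the same files, unlike with hostPath.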
Hope you have got an idea about the Kubernetes hostPath volume and how to mount it inside a Pod.
Happy Learning!
Thank you!