Managing Data and Volumes in Kubernetes

In this post, I am going to discuss volumes in Kubernetes.

Comparison

Kubernetes Volumes                                  | Docker Volumes
Supports many different drivers and types           | No driver or type support
Volumes are not necessarily persistent              | Volumes persist until manually cleared
Can survive container restarts and removals         | Can survive container restarts and removals

Read about the different types of volumes in Kubernetes.

Using an emptyDir Volume Type

The emptyDir volume type survives container restarts, but if a Pod is destroyed and a new one is created, the new Pod gets a fresh emptyDir volume. Kubernetes essentially creates a new, empty directory for each new Pod.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: story-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: story
  template:
    metadata:
      labels:
        app: story
    spec:
      containers:
        - name: story
          image: vighnesh153/kub-data-demo:1
          volumeMounts:
            # Mount the Pod-level volume declared below into the container
            - mountPath: /app/story
              name: story-volume
      volumes:
        - name: story-volume
          # Creates a new, empty directory that lives as long as the Pod
          emptyDir: {}
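
As a quick sanity check (a sketch; the manifest file name deployment.yaml and the placeholder <pod-name> are assumptions), apply the Deployment and list the mounted directory inside the container:

kubectl apply -f deployment.yaml

# Find the Pod created by the Deployment
kubectl get pods -l app=story

# List the mounted directory inside the container
kubectl exec <pod-name> -- ls /app/story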

Using hostPath Volume Type

This volume type binds the volume to a path on the host machine, i.e., a Worker Node. All Pods running on that Worker Node can share the data, and new Pods scheduled on the same node can access it as well.

volumes:
  - name: my-volume
    hostPath:
      path: /vighnesh153-data
      type: DirectoryOrCreate
      # Ensures the path is a directory; if nothing
      # exists at that path, one is created for us.
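
On a single-node cluster such as minikube (an assumption; on other setups you would SSH into the relevant Worker Node instead), you can confirm the directory was created on the host:

minikube ssh "ls /vighnesh153-data"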

Defining Persistent Volume

A PersistentVolume is created in the same cluster but is independent of the Worker and Master Nodes, so its lifecycle is detached from any individual Pod or Node.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    # ReadWriteOnce: the volume can be mounted read-write
    # by a single Node at a time
    - ReadWriteOnce
  hostPath:
    path: /vighnesh153-data
    type: DirectoryOrCreate
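
After applying this manifest (the file name pv.yaml is an assumption), the new volume should be listed with the status Available:

kubectl apply -f pv.yaml
kubectl get pv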

I know that hostPath only creates volumes that are tied to a specific Worker Node, but the idea behind defining the Volume stays the same no matter which persistent storage volume type we use.

Creating Persistent Volume Claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-persistent-volume-claim
spec:
  # Explicitly bind this claim to the PersistentVolume defined above;
  # without volumeName, Kubernetes would match a suitable PV on its own
  volumeName: my-persistent-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
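
Once the claim is applied (the file name pvc.yaml is an assumption), its STATUS should change to Bound, which means it matched the PersistentVolume defined above:

kubectl apply -f pvc.yaml
kubectl get pvc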

Using the Persistent Volume Claim in Deployments

apiVersion: apps/v1
kind: Deployment
metadata:
  name: story-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: story
  template:
    metadata:
      labels:
        app: story
    spec:
      containers:
        - name: story
          image: vighnesh153/kub-data-demo:1
          volumeMounts:
            - mountPath: /app/story
              name: story-volume
      volumes:
        - name: story-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
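
To see the difference from emptyDir (a sketch; <pod-name> is a placeholder and the image is assumed to ship a shell), write a file into the mount, delete the Pod, and check that the replacement Pod still sees the data:

kubectl exec <pod-name> -- sh -c 'echo hello > /app/story/text.txt'
kubectl delete pod <pod-name>

# The Deployment recreates the Pod; the file survives
kubectl exec <new-pod-name> -- cat /app/story/text.txt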

Creating Config Maps

Create a YAML file with any name, e.g. environment.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-data-store
data:
  myFirstName: Vighnesh
  myFavIni: DVR
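
Apply it and inspect the stored keys (the file name environment.yaml comes from above):

kubectl apply -f environment.yaml
kubectl get configmap my-data-store -o yaml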

Using the ConfigMap in deployments:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: story-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: story
  template:
    metadata:
      labels:
        app: story
    spec:
      containers:
        - name: story
          image: vighnesh153/kub-data-demo:1
          env:
            # Plain value, set directly in the manifest
            - name: LAST_NAME
              value: Raut
            # Value pulled in from the ConfigMap created above
            - name: FIRST_NAME
              valueFrom:
                configMapKeyRef:
                  name: my-data-store
                  key: myFirstName
          volumeMounts:
            - mountPath: /app/story
              name: story-volume
      volumes:
        - name: story-volume
          persistentVolumeClaim:
            claimName: my-persistent-volume-claim
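
Finally, you can confirm that the container sees both environment variables (<pod-name> is a placeholder):

kubectl exec <pod-name> -- printenv FIRST_NAME LAST_NAME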
