HostPath: assign a PersistentVolume to a specific worker node in the cluster

Submitted by 我怕爱的太早我们不能终老 on 2021-01-05 08:40:06

Question


I used kubeadm to create a cluster with one master node and one worker node.

Now I want to create a persistentVolume on the worker node and bind it to the Postgres pod.

I expected the code below to create the persistentVolume at the path /postgres on the worker node, but it seems hostPath does not work that way in a cluster. How should I assign this volume to a specific node?

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-postgres
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/postgres"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
      - name: vol-postgres
        persistentVolumeClaim:
          claimName: pvc-postgres
      containers:
      - name: postgres
        image: postgres:12
        imagePullPolicy: Always
        env:
        - name: DB_USER
          value: postgres
        - name: DB_PASS
          value: postgres
        - name: DB_NAME
          value: postgres
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: "/postgres"
          name: vol-postgres
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - name: postgres
    port: 5432
    targetPort: postgres
  selector:
    app: postgres

Answer 1:


As per the docs:

A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

In short, a hostPath volume refers to a resource on the node (machine or VM) where the pod is scheduled, which means the folder must already exist on that node. To pin the workload and the volume to a specific node, you have to use node affinity (or a nodeSelector) in your Deployment and nodeAffinity in the PV.
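
A minimal sketch of the simplest form, a nodeSelector in the pod template (using the worker node name ubuntu18-kubeadm-worker1 from the examples below; adjust it to your own node's hostname label):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeSelector:
        kubernetes.io/hostname: ubuntu18-kubeadm-worker1   # pods are scheduled only on this node
      containers:
      - name: postgres
        image: postgres:12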

Depending on the scenario, using hostPath is not the best idea; however, the example YAMLs below should illustrate the concept. They are based on your YAMLs but use the nginx image.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-postgres
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/postgres" ## this folder need exist on your node. Keep in minds also who have permissions to folder. Used tmp as it have 3x rwx
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ubuntu18-kubeadm-worker1    

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: nginx
        name: nginx      
        volumeMounts:
        - mountPath: /home    ## path to folder inside container
          name: vol-postgres
      affinity:               ## nodeAffinity to schedule all pods on the specific node named ubuntu18-kubeadm-worker1
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ubuntu18-kubeadm-worker1  
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
      - name: vol-postgres
        persistentVolumeClaim:
          claimName: pvc-postgres

persistentvolume/pv-postgres created
persistentvolumeclaim/pvc-postgres created
deployment.apps/postgres created

Unfortunately, a PV is bound to a PVC in a 1:1 relationship, so you would need to create a new PV and PVC pair each time.
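
If you have several such hostPath PVs and want a claim to bind to one particular volume, one option is to pre-bind the claim via spec.volumeName; a minimal sketch, reusing the pv-postgres name from above:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pv-postgres   # bind this claim only to the PV named pv-postgres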

However, if you are using hostPath, it is enough to specify nodeAffinity, volumeMounts and volumes in the Deployment YAML, without a PV and PVC.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: nginx:latest
        name: nginx      
        volumeMounts:
        - mountPath: /home    
          name: vol-postgres
      affinity:               
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ubuntu18-kubeadm-worker1  
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      volumes:
      - name: vol-postgres
        hostPath:
          path: /tmp/postgres

deployment.apps/postgres created

user@ubuntu18-kubeadm-master:~$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
postgres-77bc9c4566-jgxqq   1/1     Running   0          9s
user@ubuntu18-kubeadm-master:~$ kk exec -ti postgres-77bc9c4566-jgxqq /bin/bash
root@ubuntu18-kubeadm-worker1:/# cd home
root@ubuntu18-kubeadm-worker1:/home# ls
test.txt  txt.txt



Answer 2:


There are several ways to achieve this. You can mount your volume from a NAS, or build a storage cluster out of disks and create a PersistentVolume and PersistentVolumeClaim on top of it. If your use case is persistence on local storage, you can create a local-storage StorageClass backed by a directory or disk on one of your cluster nodes; pods anywhere in the cluster can claim that volume, and they will be scheduled onto that node when they use it. To create a local-storage StorageClass, see https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
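
A minimal sketch of the local-storage approach described in that post, assuming a directory /mnt/disks/pg already exists on the worker node ubuntu18-kubeadm-worker1 (both names are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes are not dynamically provisioned
volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod using the claim is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-postgres-local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/pg                     # must already exist on that node
  nodeAffinity:                             # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ubuntu18-kubeadm-worker1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-postgres-local
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi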



Source: https://stackoverflow.com/questions/60247100/hostpath-assign-persistentvolume-to-the-specific-work-node-in-cluster
