Kubernetes trouble with StatefulSet and 3 PersistentVolumes

Submitted by 丶灬走出姿态 on 2021-02-08 10:40:34

Question


I'm in the process of creating a StatefulSet based on this yaml, that will have 3 replicas. I want each of the 3 pods to connect to a different PersistentVolume.

For the persistent volume I'm using 3 objects that look like this, with only the name changed (pvvolume, pvvolume2, pvvolume3):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvvolume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: mongo-persistent-storage-mongo-0

The first of the 3 pods in the StatefulSet seems to be created without issue.

The second fails with the error pod has unbound PersistentVolumeClaims, followed by Back-off restarting failed container.

Yet if I go to the tab showing PersistentVolumeClaims, the second one that was created seems to have been successful.

If it was successful, why does the pod think it failed?
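
For reference, the same status the dashboard shows can be checked from the command line (the pod name mongo-1 below is an assumption based on the claim name in the manifest above):

kubectl get pvc
kubectl get pv
kubectl describe pod mongo-1   # the Events section shows the unbound-claim / back-off messages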


Answer 1:


I want each of the 3 pods to connect to a different PersistentVolume.

  • For that to work properly you will need either:

    • a provisioner (the link you posted shows examples of how to set up a provisioner on AWS, Azure, Google Cloud and minikube), or
    • a volume capable of being mounted multiple times (such as an NFS volume). Note however that in such a case all your pods read/write to the same folder, and this can lead to issues when they are not meant to lock/write to the same data concurrently. The usual use case for this is an upload folder that pods save to and that is later used for reading only, and similar cases. SQL databases (such as MySQL), on the other hand, are not meant to write to such a shared folder. A minimal sketch of such a shared volume follows after this list.
  • Instead of either of the mentioned requirements, in your claim manifest you are using hostPath (pointing to /nfs) and ReadWriteOnce (only one pod can use it). You are also using 'standard' as the storage class, while the URL you gave defines only fast and slow ones, so you probably created that storage class yourself as well.
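
As a rough illustration of that second option (not taken from the original question; the server address and export path are placeholders), an NFS-backed PersistentVolume that many pods can mount at once could look like this:

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: shared-nfs-volume       # hypothetical name
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany             # allows many pods to mount the volume simultaneously
      nfs:
        server: 10.0.0.10           # placeholder: address of an existing NFS server
        path: /exports/shared       # placeholder: exported directory on that server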

The second fails with the error pod has unbound PersistentVolumeClaims, followed by Back-off restarting failed container

  • That is because the first pod already took its claim (ReadWriteOnce, hostPath) and the second pod can't reuse the same one when a proper provisioner or shared access is not set up.

If it was successful, why does the pod think it failed?

  • All PVCs were successfully bound to an accompanying PV. But you never bind the second and third PVCs to the second or third pod: you are retrying with the first claim on the second pod, and the first claim is already bound (to the first pod) in ReadWriteOnce mode, so it can't be bound to the second pod as well and you get the error. (Note also that, per the question, every copied PV still carries a claimRef to mongo-persistent-storage-mongo-0, so none of them is reserved for the second or third claim.) A sketch of per-pod PVs follows below.
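
If you wanted to keep manually pre-created hostPath volumes instead of a provisioner (the suggested approach below), a minimal sketch would be to give each PV its own claimRef matching the PVC name the StatefulSet generates per ordinal. The names below (pvvolume2, mongo-persistent-storage-mongo-1, and so on) are assumptions following the naming pattern in the question:

    # pvvolume keeps its claimRef to mongo-persistent-storage-mongo-0;
    # the second PV reserves itself for the claim of ordinal 1:
    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: pvvolume2
      labels:
        type: local
    spec:
      storageClassName: standard
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/nfs"                # note: all three PVs would still share this directory on the node
      claimRef:
        kind: PersistentVolumeClaim
        namespace: default
        name: mongo-persistent-storage-mongo-1
    # and likewise pvvolume3 with claimRef name mongo-persistent-storage-mongo-2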

Suggested approach

Since you reference /nfs as your host path, it may be safe to assume that you are using some kind of NFS-backed file system, so here is an alternative setup that lets you mount dynamically provisioned persistent volumes over NFS to as many pods in the stateful set as you want.

Notes:

  • This only answers the original question of mounting persistent volumes across stateful-set replicated pods, under the assumption of NFS sharing.
  • NFS is not really advisable for dynamic data such as a database. The usual use case is an upload folder or a moderate logging/backup folder. A database (SQL or NoSQL) is usually a no-no on NFS.
  • For mission- or time-critical applications you might want to benchmark/stress-test carefully before taking this approach to production, since both k8s and the external PV add some layers/latency in between. Although for some applications this might suffice, be warned about it.
  • You have limited control over the names of PVs that are dynamically created (k8s adds a suffix to newly created ones, and reuses available old ones if told to do so), but k8s will keep them after a pod gets terminated and assign the first available one to a new pod, so you won't lose state/data. This is something you can control with reclaim policies, though.

Steps:

  • for this to work you will first need to install the nfs provisioner from here:

    • https://github.com/kubernetes-incubator/external-storage/tree/master/nfs. Mind you, the installation is not complicated, but it has some steps where you have to take a careful approach (permissions, setting up NFS shares, etc.), so it is not just a fire-and-forget deployment. Take your time installing the nfs provisioner correctly. Once this is properly set up, you can continue with the suggested manifests below (a short apply/verify sketch follows after the manifests):
  • Storage class manifest:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: sc-nfs-persistent-volume
    # if you changed this during provisioner installation, update also here
    provisioner: example.com/nfs 
    
  • Stateful Set (important excerpt only):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss-my-app
    spec:
      replicas: 3
      ...
      selector:
        matchLabels:
          app: my-app
          tier: my-mongo-db
      ...
      template:
        metadata:
          labels:
            app: my-app
            tier: my-mongo-db
        spec:
          ...
          containers:
            - image: ...
              ...
              volumeMounts:
                - name: persistent-storage-mount
                  mountPath: /wherever/on/container/you/want/it/mounted
          ...
      ...
      volumeClaimTemplates:
      - metadata:
          name: persistent-storage-mount
        spec:
          storageClassName: sc-nfs-persistent-volume
          accessModes: [ ReadWriteOnce ]
          resources:
            requests:
              storage: 10Gi
      ...
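
Once the provisioner, storage class and StatefulSet are in place, a quick way to verify that every replica got its own dynamically provisioned volume is something like the following (the manifest file names are placeholders; the PVC names follow the usual <claim-template>-<statefulset>-<ordinal> pattern, so for the excerpt above they should come out as persistent-storage-mount-ss-my-app-0, -1 and -2):

    # apply the manifests (file names are placeholders)
    kubectl apply -f sc-nfs-persistent-volume.yaml
    kubectl apply -f ss-my-app.yaml

    # each replica should end up with its own bound claim and its own PV
    kubectl get pvc
    #   persistent-storage-mount-ss-my-app-0   Bound   ...   10Gi   RWO   sc-nfs-persistent-volume
    #   persistent-storage-mount-ss-my-app-1   Bound   ...   10Gi   RWO   sc-nfs-persistent-volume
    #   persistent-storage-mount-ss-my-app-2   Bound   ...   10Gi   RWO   sc-nfs-persistent-volume
    kubectl get pv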
    


Source: https://stackoverflow.com/questions/50237572/kubernetes-trouble-with-statefulset-and-3-persistentvolumes
