Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict

一向 2020-12-14 14:24

I am trying to set up a Kubernetes cluster. I have a PersistentVolume, a PersistentVolumeClaim and a StorageClass all set up and running, but when I want to create a pod from a deployment, the pod stays pending with the warning "1 node(s) had volume node affinity conflict".

7 Answers
  • 2020-12-14 14:47

    In my case, the root cause was that the persistent volume was in us-west-2c, while the new worker nodes had been relaunched into us-west-2a and us-west-2b. The solution is either to have more worker nodes so that they cover more zones, or to remove or widen the node affinity of the application so that more worker nodes qualify to be bound to the persistent volume.
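
    For reference, a zonal PV typically carries a node affinity term like the sketch below (the zone value is only an example, not taken from the answer); pods using the claim can then only land on nodes whose zone label matches:

      # Hedged sketch: the kind of zone restriction a zonal PV carries in spec.nodeAffinity.
      # us-west-2c is an example value; newer clusters use topology.kubernetes.io/zone instead.
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: failure-domain.beta.kubernetes.io/zone
                  operator: In
                  values:
                    - us-west-2c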

  • 2020-12-14 14:49

    Almost the same problem is described here: https://github.com/kubernetes/kubernetes/issues/61620

    "If you're using local volumes, and the node crashes, your pod cannot be rescheduled to a different node. It must be scheduled to the same node. That is the caveat of using local storage, your Pod becomes bound forever to one specific node."

  • 2020-12-14 14:51

    There are a few things that can cause this error:

    1. The node isn't labeled properly. I had this issue on AWS when my worker node didn't have the appropriate labels (the master had them though), such as:

      failure-domain.beta.kubernetes.io/region=us-east-2

      failure-domain.beta.kubernetes.io/zone=us-east-2c

      After patching the node with the labels, the "1 node(s) had volume node affinity conflict" error was gone, and the PV, PVC and pod were deployed successfully. The values of these labels are cloud-provider specific. Basically, it is the job of the cloud provider (with the --cloud-provider option passed to kube-controller-manager, kube-apiserver and kubelet) to set those labels. If the appropriate labels aren't set, check that your cloud-provider integration is correct. I used kubeadm, so it is cumbersome to set up, but with other tools, kops for instance, it works right away.

    2. Based on your PV definition and the use of the nodeAffinity field, you are trying to use a local volume (see the local volume description in the official docs). In that case, make sure you set the nodeAffinity field like this (it worked in my case on AWS):

      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - my-node  # it must be the name of your node (kubectl get nodes)


      So that after creating the resource and running kubectl describe on it, it will show up like this:

             Required Terms:  
                        Term 0:  kubernetes.io/hostname in [your node name]
    
    3. The StorageClass definition (named local-storage, which is not posted here) must be created with volumeBindingMode set to WaitForFirstConsumer for local storage to work properly. Refer to the local storage class example in the official docs to understand the reason behind that; a minimal sketch of such a StorageClass follows this list.
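
    For point 3, such a StorageClass would look roughly like the sketch below (the name local-storage is taken from the answer; the rest follows the official local-volume documentation):

      # Hedged sketch of a StorageClass for local volumes.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: local-storage
      provisioner: kubernetes.io/no-provisioner   # local volumes are not dynamically provisioned
      volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod using the claim is scheduled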
  • 2020-12-14 14:52

    A different case, from GCP GKE. Assume that you are using a regional cluster and you created two PVCs. They were provisioned in different zones (and you didn't notice).

    In the next step you try to run a pod that mounts both PVCs. The pod has to be scheduled to a specific node in a specific zone, but because the volumes are in different zones, Kubernetes won't be able to schedule it and you will see this error.

    For example, two simple PVCs on a regional cluster (nodes in different zones):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: disk-a
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: disk-b
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    

    Next, a simple pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: debug
    spec:
      containers:
        - name: debug
          image: pnowy/docker-tools:latest
          command: [ "sleep" ]
          args: [ "infinity" ]
          volumeMounts:
            - name: disk-a
              mountPath: /disk-a
            - name: disk-b
              mountPath: /disk-b
      volumes:
        - name: disk-a
          persistentVolumeClaim:
            claimName: disk-a
        - name: disk-b
          persistentVolumeClaim:
            claimName: disk-b
    

    Finally, as a result, it can happen that Kubernetes is not able to schedule the pod because the volumes are in different zones.
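
    One way to avoid this on GKE (my own suggestion, not part of the answer above) is to provision the disks through a StorageClass with volumeBindingMode: WaitForFirstConsumer, so that the zone is chosen only after the pod has been scheduled and both disks end up in the same zone. A minimal sketch, with an assumed name and disk type:

      # Hedged sketch: delay disk provisioning until the pod is scheduled.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: ssd-wait            # assumed name; reference it via storageClassName in both PVCs
      provisioner: kubernetes.io/gce-pd
      parameters:
        type: pd-ssd
      volumeBindingMode: WaitForFirstConsumer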

  • 2020-12-14 15:03

    The "1 node(s) had volume node affinity conflict" error is created by the scheduler because it can't schedule your pod to a node that conforms with the persistenvolume.spec.nodeAffinity field in your PersistentVolume (PV).

    In other words, your PV says that a pod using it must be scheduled to a node with the label kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu = master, but this isn't possible for some reason.

    There may be various reasons why your pod can't be scheduled to such a node:

    • The pod has node affinities, pod affinities, etc. that conflict with the target node
    • The target node is tainted
    • The target node has reached its "max pods per node" limit
    • There exists no node with the given label

    The place to start looking for the cause is the definition of the node and the pod.
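
    For reference, that field sits directly under the PV spec; with the label mentioned above it would look roughly like this (a reconstruction from the error message, not the actual manifest from the question):

      # Hedged reconstruction of the PV's node affinity.
      spec:
        nodeAffinity:
          required:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu
                    operator: In
                    values:
                      - master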

  • 2020-12-14 15:09

    Great answer by Sownak Roy. I've had the same case of a PV being created in a different zone compared to the node that was supposed to use it. The solution I applied was based on Sownak's answer, except that in my case it was enough to specify the storage class without the allowedTopologies list, like this:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cloud-ssd
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
    volumeBindingMode: WaitForFirstConsumer
    