Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict


Question


I am trying to set up a Kubernetes cluster. I have the PersistentVolume, PersistentVolumeClaim and StorageClass all set up and running, but when I want to create a pod from a deployment, the pod is created but hangs in the Pending state. After running describe I only get this warning: "1 node(s) had volume node affinity conflict." Can somebody tell me what I am missing in my volume configuration?

apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: mariadb-pv0
  name: mariadb-pv0
spec:
  volumeMode: Filesystem
  storageClassName: local-storage
  local:
    path: "/home/gtcontainer/applications/data/db/mariadb"
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    namespace: default
    name: mariadb-claim0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu
            operator: In
            values:
            - master
status: {}

Answer 1:


The error "volume node affinity conflict" happens when the persistent volume claims that the pod is using are scheduled on different zones, rather than on one zone, and so the actual pod was not able to be scheduled because it cannot connect to the volume from another zone. To check this, you can see the details of all the Persistent Volumes. To check that, first get your PVCs:

$ kubectl get pvc -n <namespace>

Then get the details of the Persistent Volumes (not the Volume Claims):

$  kubectl get pv

Find the PVs that correspond to your PVCs and describe them:

$  kubectl describe pv <pv1> <pv2>
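
In addition to describe, a quick way to see which zone or node each PV is pinned to is the following (a sketch; the zone label key is topology.kubernetes.io/zone on newer clusters and failure-domain.beta.kubernetes.io/zone on older ones):

$ kubectl get pv --show-labels

$ kubectl get pv <pv-name> -o yaml | grep -A 8 nodeAffinity

$ kubectl get nodes --show-labels | grep zone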

Check the Source.VolumeID for each of the PVs; most likely they will be in different availability zones, which is why your pod gets the affinity error. To fix this, create a StorageClass for a single zone and use that StorageClass in your PVC, for example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: region1storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true" # if encryption required
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - eu-west-2b # this is the availability zone, will depend on your cloud provider
    # multi-az can be added, but that defeats the purpose in our scenario
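
For completeness, here is a minimal PVC sketch that references this StorageClass; the claim name and namespace are taken from the question's claimRef and the 2Gi size matches the question's PV capacity, so adjust them to your setup:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-claim0          # name taken from the question's claimRef
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: region1storageclass
  resources:
    requests:
      storage: 2Gi              # assumed size, match your PV capacity

With volumeBindingMode: WaitForFirstConsumer, binding and provisioning are delayed until a pod using the claim is scheduled, so the volume ends up in the same zone as that pod's node.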



Answer 2:


There are a few things that can cause this error:

  1. The node isn't labeled properly. I had this issue on AWS when my worker node didn't have the appropriate labels (the master had them though), such as:

    failure-domain.beta.kubernetes.io/region=us-east-2

    failure-domain.beta.kubernetes.io/zone=us-east-2c

    After patching the node with the labels, the "1 node(s) had volume node affinity conflict" error was gone, and the PV and PVC with a pod were deployed successfully (a sample labeling command is shown after this list). The value of these labels is cloud-provider specific. Basically, it is the job of the cloud provider (with the --cloud-provider option set on kube-controller-manager, kube-apiserver and kubelet) to set those labels. If the appropriate labels aren't set, then check that your cloud provider integration is correct. I used kubeadm, so it is cumbersome to set up, but with other tools, kops for instance, it works right away.

  2. Based on your PV definition and the use of the nodeAffinity field, you are trying to use a local volume (see the local volume description in the official docs). In that case, make sure that you set the nodeAffinity field like this (it worked in my case on AWS):

    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - my-node  # it must be the name of your node (kubectl get nodes)


After creating the resource and running describe on it, the affinity will show up there like this:

         Required Terms:  
                    Term 0:  kubernetes.io/hostname in [your node name]
  3. The StorageClass definition (named local-storage, which is not posted in the question) must be created with volumeBindingMode set to WaitForFirstConsumer for local storage to work properly. Refer to the local storage class example in the official docs to understand the reason behind that; a sketch of such a StorageClass is shown below.
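
A sample labeling command for point 1 (the node name is a placeholder; the region/zone values are the ones from this answer and will differ in your cluster):

$ kubectl label node <your-node-name> failure-domain.beta.kubernetes.io/region=us-east-2 failure-domain.beta.kubernetes.io/zone=us-east-2c

And for point 3, a minimal sketch of a local-storage StorageClass; the name local-storage matches the storageClassName used in the question's PV, and local volumes are statically provisioned, hence the no-provisioner provisioner:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner  # no dynamic provisioning for local volumes
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled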



Answer 3:


The "1 node(s) had volume node affinity conflict" error is created by the scheduler because it can't schedule your pod to a node that conforms with the persistenvolume.spec.nodeAffinity field in your PersistentVolume (PV).

In other words, your PV says that a pod using this PV must be scheduled to a node with the label kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu = master, but this isn't possible for some reason.

There may be various reasons why your pod can't be scheduled to such a node:

  • The pod has node affinities, pod affinities, etc. that conflict with the target node
  • The target node is tainted
  • The target node has reached its "max pods per node" limit
  • There exists no node with the given label

The place to start looking for the cause is the definition of the node and the pod.
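
A few commands that help narrow down which of these reasons applies (a sketch; substitute your own pod and node names):

$ kubectl describe pod <pod-name>     # scheduling events are listed at the bottom

$ kubectl get nodes --show-labels     # does any node carry the label the PV requires?

$ kubectl describe node <node-name>   # check Taints and the pods line under Capacity/Allocatable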




Answer 4:


Almost the same problem is described here: https://github.com/kubernetes/kubernetes/issues/61620

"If you're using local volumes, and the node crashes, your pod cannot be rescheduled to a different node. It must be scheduled to the same node. That is the caveat of using local storage, your Pod becomes bound forever to one specific node."




Answer 5:


In my case, the root cause was that the persistent volume was in us-west-2c while the new worker nodes were relaunched into us-west-2a and us-west-2b. The solution is either to have more worker nodes so that they cover more zones, or to remove/widen the node affinity for the application so that more worker nodes qualify to be bound to the persistent volume.
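
To see at a glance which zone each node ended up in after the relaunch, and compare it with the zone of the PV, something like the following can be used (the zone label key may be failure-domain.beta.kubernetes.io/zone on older clusters):

$ kubectl get nodes -L topology.kubernetes.io/zone

$ kubectl get pv --show-labels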



Source: https://stackoverflow.com/questions/51946393/kubernetes-pod-warning-1-nodes-had-volume-node-affinity-conflict
