What happens when you drain nodes in a Kubernetes cluster?


Question


I'd like some clarification, in preparation for maintenance, on what happens when you drain nodes in a Kubernetes cluster:

Here's what I know when you run kubectl drain MY_NODE:

  • Node is cordoned
  • Pods are gracefully shut down
  • You can opt to ignore DaemonSet pods, because if they were shut down they would just be respawned again right away (see the example below).
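
For concreteness, here is the kind of invocation I mean (MY_NODE is just a placeholder node name):

    kubectl drain MY_NODE --ignore-daemonsets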

I'm confused as to what happens when a node is drained though.

Questions:

  • What happens to the pods? As far as I know, there's no 'live migration' of pods in Kubernetes.
  • Will the pod be shut down and then automatically started on another node? Or does this depend on my configuration? (i.e. could a pod be shut down via drain and not start up on another node)

I would appreciate some clarification on this and any best practices or advice as well. Thanks in advance.


Answer 1:


I just want to add a few things to eamon1234's answer:

You may find this useful as well:

  1. Link to the official documentation (in case default flags change, etc.). According to it:

    The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then drain will not delete any pods unless you use --force. --force will also allow deletion to proceed if the managing resource of one or more pods is missing.

  2. Simple chart illustrating what actually happens when using kubectl drain.

  3. Using kubectl drain with the --dry-run option may also be a good idea, so you can see its outcome before any actual changes are applied, e.g.:

    kubectl drain foo --force --dry-run

    however, it will not show errors about existing local data or DaemonSets, which you would see without the --dry-run flag: ... error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore) ...
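
    A minimal sketch of that workflow, reusing the node name foo from the example above (both flags are standard kubectl drain options):

    # Preview which pods would be evicted, without changing anything:
    kubectl drain foo --ignore-daemonsets --dry-run

    # If the output looks right, run the actual drain:
    kubectl drain foo --ignore-daemonsets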




Answer 2:


By default, kubectl drain is non-destructive; you have to override explicitly to change that behaviour. It runs with the following defaults:

  --delete-local-data=false
  --force=false
  --grace-period=-1
  --ignore-daemonsets=false
  --timeout=0s

Each of these safeguards deals with a different category of potential destruction (local data, bare pods, graceful termination, DaemonSets). Drain also respects PodDisruptionBudgets to maintain workload availability. Any non-bare pod will be recreated on a new node by its respective controller (e.g. the DaemonSet controller or the ReplicationController).
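
To see in advance which PodDisruptionBudgets could block evictions, a quick check (pdb is the built-in short name for PodDisruptionBudget):

  # List PodDisruptionBudgets in all namespaces; drain's evictions must respect these:
  kubectl get pdb --all-namespaces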

It's up to you whether you want to override that behaviour (for example, you might have a bare pod when running a Jenkins job; if you override by setting --force=true, drain will delete that pod and it won't be recreated). If you don't override it, the drain will wait indefinitely (--timeout=0s).
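
As a sketch of such an override (MY_NODE is a placeholder; the flags are the ones listed above):

  # Destructive: also evicts bare pods (they will NOT be recreated),
  # and gives up after 5 minutes instead of waiting indefinitely:
  kubectl drain MY_NODE --force --ignore-daemonsets --timeout=300s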




Answer 3:


We can use kubectl drain to safely evict all of our pods from a node before we perform maintenance on the node.

If you want to update, patch, or perform any other kind of maintenance on the hardware/node, you should first drain all the pods (evicting them so their controllers reschedule them onto other nodes) with kubectl drain.

When kubectl drain returns successfully, all of the pods have been safely evicted. It is then safe to bring down the node.

After the maintenance work, we can use kubectl uncordon to tell Kubernetes that it can resume scheduling new pods onto the node.
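
Putting the whole maintenance cycle together (MY_NODE is a placeholder; flags as discussed in the other answers):

  kubectl drain MY_NODE --ignore-daemonsets   # cordon the node and evict pods; controllers reschedule them elsewhere

  # ... perform the update/patch/hardware maintenance ...

  kubectl uncordon MY_NODE                    # allow new pods to be scheduled on the node again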



Source: https://stackoverflow.com/questions/56861796/what-happens-when-you-drain-nodes-in-a-kubernetes-cluster
