Why does scaling down a deployment seem to always remove the newest pods?

余生分开走 2021-02-08 12:29

(Before I start, I'm using minikube v27 on Windows 10.)

I have created a deployment with the nginx 'hello world' container and a desired count of 2:
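The question's exact manifest was cut off in the capture, but based on the description it would look roughly like this (image name and labels are assumptions for illustration):

```yaml
# Hypothetical manifest matching the question's description:
# an nginx "hello world" container with a desired count of 2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginxdemos/hello   # image assumed; question only says nginx "hello world"
```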

1 Answer
  • 2021-02-08 13:00

    Pod deletion preference is based on an ordered series of checks, defined in code here:

    https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/controller/controller_utils.go#L737

    Summarizing: precedence is given to deleting pods:

    • that are unassigned to a node, vs assigned to a node
    • that are in pending or not running state, vs running
    • that are in not-ready, vs ready
    • that have been in ready state for fewer seconds
    • that have higher restart counts
    • that have newer vs older creation times

    These checks are not directly configurable.
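    The ordered checks above can be sketched as a comparison function. This is a simplified stand-in for the controller's real `ActivePods.Less` (which compares full `v1.Pod` objects), using a minimal pod struct to show how the tie-breakers cascade:

    ```go
    package main

    import (
    	"fmt"
    	"sort"
    	"time"
    )

    // pod is a simplified stand-in for a Kubernetes Pod; the real controller
    // compares v1.Pod objects in ActivePods.Less (controller_utils.go).
    type pod struct {
    	name         string
    	assigned     bool      // scheduled onto a node
    	running      bool      // phase is Running
    	ready        bool      // Ready condition is true
    	readySince   time.Time // when the pod last became ready
    	restartCount int
    	created      time.Time
    }

    // lessForDeletion returns true if pod a should be deleted before pod b,
    // applying the ordered checks in the same sequence the controller uses.
    func lessForDeletion(a, b pod) bool {
    	if a.assigned != b.assigned {
    		return !a.assigned // unassigned before assigned
    	}
    	if a.running != b.running {
    		return !a.running // pending / not-running before running
    	}
    	if a.ready != b.ready {
    		return !a.ready // not-ready before ready
    	}
    	if a.ready && b.ready && !a.readySince.Equal(b.readySince) {
    		return a.readySince.After(b.readySince) // ready for fewer seconds first
    	}
    	if a.restartCount != b.restartCount {
    		return a.restartCount > b.restartCount // higher restart count first
    	}
    	return a.created.After(b.created) // newer creation time first
    }

    func main() {
    	now := time.Now()
    	oldNotReady := pod{name: "old-not-ready", assigned: true, running: true,
    		ready: false, created: now.Add(-time.Hour)}
    	newReady := pod{name: "new-ready", assigned: true, running: true,
    		ready: true, readySince: now.Add(-time.Minute),
    		created: now.Add(-2 * time.Minute)}

    	pods := []pod{newReady, oldNotReady}
    	sort.Slice(pods, func(i, j int) bool { return lessForDeletion(pods[i], pods[j]) })

    	// The older but not-ready pod sorts to the front of the deletion order,
    	// ahead of the newer, ready pod.
    	fmt.Println(pods[0].name)
    }
    ```

    Note how the creation-time check is last: it only decides the outcome when every earlier check ties, which is why two healthy, never-restarted pods are culled newest-first.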

    Given the rules, if you can make an old pod not ready, or cause an old pod to restart, it will be removed at scale-down time before a newer pod that is ready and has not restarted.

    There is discussion around use cases for the ability to control deletion priority, which mostly involve workloads that are a mix of job and service, here:

    https://github.com/kubernetes/kubernetes/issues/45509
