Kubernetes - Rolling update killing off old pod without bringing up new one

醉酒成梦 2021-02-20 09:39

I am currently using Deployments to manage my pods in my K8S cluster.

Some of my deployments require 2 pods/replicas, some require 3 pods/replicas, and some of them require …

3 Answers
  •  梦如初夏 2021-02-20 10:31

    It appears to be maxUnavailable: 1; I was able to reproduce your experience trivially by setting that value, and to get the correct behavior just as trivially by setting maxUnavailable: 0.

    Here's my "pseudo-proof" of how the Deployment controller arrives at the behavior you are experiencing:

    Because replicas: 1, the desired state for k8s is exactly one Pod in Ready. During a RollingUpdate rollout, which is the strategy you requested, it would normally create a new Pod, bringing the total to 2. But you granted k8s permission to leave one Pod in an unavailable state, and you instructed it to keep the desired number of Pods at 1. So it fulfilled all of those constraints: 1 Pod (the desired count), in an unavailable state, which the RollingUpdate strategy permits.
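
    Concretely, the combination that reproduces this looks roughly like the fragment below (a sketch; your full manifest isn't shown in the question, so the maxSurge value here is an assumption):

        spec:
          replicas: 1
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxUnavailable: 1   # the rollout may drop to 1 - 1 = 0 Ready Pods
              maxSurge: 1         # assumed; even so, the old Pod may be terminated before the new one is Ready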

    By setting maxUnavailable to zero, you correctly direct k8s never to let any Pod be unavailable, even if that means surging above the replica count for a short time.
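
    For reference, here is a complete minimal Deployment with that fix applied (the name, labels, image, and probe are illustrative placeholders, not taken from your manifest):

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-app                # illustrative name
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: my-app
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxUnavailable: 0       # never drop below the desired replica count mid-rollout
              maxSurge: 1             # allow one extra Pod above replicas while the new one starts
          template:
            metadata:
              labels:
                app: my-app
            spec:
              containers:
                - name: my-app
                  image: nginx:1.25   # illustrative image
                  readinessProbe:     # "Ready" is what gates termination of the old Pod
                    httpGet:
                      path: /
                      port: 80

    With this in place, kubectl rollout status deployment/my-app should show the replacement Pod reach Ready before the old Pod is terminated.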
