Question
I am currently using Deployments to manage the pods in my K8S cluster.
Some of my deployments require 2 pods/replicas, some require 3 pods/replicas, and some of them require just 1 pod/replica. The issue I'm having is with the one-pod/one-replica deployment.
My YAML file is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-management-backend-deployment
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
  selector:
    matchLabels:
      name: user-management-backend
  template:
    metadata:
      labels:
        name: user-management-backend
    spec:
      containers:
      - name: user-management-backend
        image: proj_csdp/user-management_backend:3.1.8
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /user_management/health
          initialDelaySeconds: 300
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            port: 8080
            path: /user_management/health
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: nfs
          mountPath: "/vault"
      volumes:
      - name: nfs
        nfs:
          server: kube-nfs
          path: "/kubenfs/vault"
          readOnly: true
I have the old version running fine.
# kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-mrrvl 1/1 Running 0 4d
Now I want to update the image:
# kubectl set image deployment user-management-backend-deployment user-management-backend=proj_csdp/user-management_backend:3.2.0
Now, per the RollingUpdate design, K8S should bring up the new pod while keeping the old pod working, and delete the old pod only once the new pod is ready to take traffic. But what I see is that the old pod is deleted immediately and the new pod is created, and it then takes time to start accepting traffic, meaning that I have to drop traffic.
# kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9 0/1 ContainerCreating 0 1s
# kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9 1/1 Running 0 33s
I have set maxSurge: 2 and maxUnavailable: 1, but this does not seem to be working.
Any ideas why this is not working?
Answer 1:
It appears to be the maxUnavailable: 1; I was able to trivially reproduce your experience with that value, and trivially achieve the correct behavior by setting it to maxUnavailable: 0.
Here's my "pseudo-proof" of how the Deployment controller arrived at the behavior you are experiencing:
Because replicas: 1, the desired state for k8s is exactly one Pod in Ready. During a RollingUpdate operation, which is the strategy you requested, it will create a new Pod, bringing the total to 2. But you granted k8s permission to leave one Pod in an unavailable state, and you instructed it to keep the desired number of Pods at 1. Thus, it fulfilled all of those constraints: 1 Pod, the desired count, in an unavailable state, permitted by the RollingUpdate strategy.
By setting the maxUnavailable to zero, you correctly direct k8s to never let any Pod be unavailable, even if that means surging Pods above the replicas count for a short time.
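For reference, a sketch of the fix as applied to the strategy block from the manifest above; only maxUnavailable changes, and the existing maxSurge: 2 already permits the temporary extra Pod:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # never drop below the desired replica count
    maxSurge: 2         # allow extra Pods above the replica count during the rollout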
Answer 2:
As answered already, you can set the maxUnavailable to 0 to achieve the desired result. A couple of extra notes:
You should not expect this to work when using a stateful service that mounts a single specific volume that is to be used by the new pod: the volume will still be attached to the soon-to-be-replaced pod, so it won't be able to attach to the new pod.
The documentation notes that you cannot set this to 0 if you have set .spec.strategy.rollingUpdate.maxSurge to 0.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable
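If you prefer not to edit the manifest, the same change can be applied in place with a strategic merge patch; this is just a sketch, using the deployment name from the question and an arbitrary non-zero maxSurge to satisfy the constraint above:
# kubectl patch deployment user-management-backend-deployment -p '{"spec":{"strategy":{"rollingUpdate":{"maxUnavailable":0,"maxSurge":1}}}}'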
Answer 3:
With the strategy type set to RollingUpdate, a new pod is created before the old one is deleted, even with a single replica (provided maxUnavailable permits no unavailable Pods, as discussed above). The Recreate strategy type kills old pods before creating new ones.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
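For contrast, a minimal sketch of the Recreate variant; note that Recreate takes no rollingUpdate parameters, and a brief outage during each rollout is inherent to this choice:
strategy:
  type: Recreate   # old Pods are terminated before new ones are created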
Source: https://stackoverflow.com/questions/46369100/kubernetes-rolling-update-killing-off-old-pod-without-bringing-up-new-one