My Helm chart has about 12 pods. When I ran helm upgrade after changing some values, all of the pods were restarted except for one. Why is that?
As far as I know, Helm restarts only the pods whose resources are affected by the upgrade: Kubernetes rolls a Deployment's pods only when something in its pod template (spec.template) actually changes.
If you want to restart all pods, you can use the --recreate-pods flag (Helm 2 only; the flag was deprecated and removed in Helm 3):
--recreate-pods -> performs pods restart for the resource if applicable
For example, if you have the dashboard chart deployed, you can use this command to restart every pod:
helm upgrade --recreate-pods -i k8s-dashboard stable/k8s-dashboard
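On Helm 3, where that flag no longer exists, a plain kubectl restart of the affected workload achieves the same effect (a sketch; the deployment name k8s-dashboard is assumed from the example above):
# Triggers a rolling restart by stamping the pod template with a restartedAt annotation
kubectl rollout restart deployment k8s-dashboard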
There is a GitHub issue that describes another workaround for it.
First, add an annotation to the pod template. If your chart creates a Deployment, add the annotation under spec.template.metadata.annotations. Every time you need to restart the pods, change the value of that annotation; a timestamp makes a good value. For example:
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: ecf-helm-satellite-qa
      annotations:
        timestamp: "{{ .Values.timestamp }}"
Deploy that. Now, every time you set timestamp in the helm upgrade command, Kubernetes will roll out a new update without downtime:
helm upgrade ecf-helm-satellite-qa . --set-string timestamp=a_random_value
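To guarantee a fresh value on every run, you can generate one on the fly (a minimal sketch; any string that differs from the previous release works):
# Unix epoch seconds change on every run, so the pod template always differs
helm upgrade ecf-helm-satellite-qa . --set-string timestamp=$(date +%s)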