How to recycle pods in Kubernetes


You should be managing your Pods via a higher-level controller like a Deployment or a StatefulSet. If you do, and you change any detail of the embedded pod spec, the Deployment/StatefulSet/... will restart all of your pods for you. Probably the most minimal way to do this is to add an annotation to the pod spec that says when it was last deployed:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        deployed-at: "20181222"

There is a kubectl patch one-liner to do this (a sketch follows below); if you're using a deployment/templating manager you can just pass in the current date as a "value" (configuration field) and have it injected for you.
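
For example, a minimal sketch of such a one-liner, assuming a Deployment named my-deployment (a placeholder name, substitute your own):

kubectl patch deployment my-deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"deployed-at\":\"$(date +%Y%m%d)\"}}}}}"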

If you want to think bigger, though: the various base images routinely get security updates and minor bug fixes, and if you docker pull ubuntu:18.04 once a month or so you'll pick these up. If you already know you want to restart your pods every month anyway, and you have a good CI/CD pipeline set up, consider setting up a scheduled job in Jenkins (or whatever you use) that rebuilds and redeploys everything, even if there are no changes in the underlying source tree. That will cause the image: to be updated, which will cause all of the pods to be destroyed and recreated, and you'll always be reasonably up to date on security fixes.
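
As a rough sketch of what such a scheduled job might run (the registry, image, deployment and container names below are placeholders, not anything from your setup):

# rebuild and push with a date-stamped tag
TAG=$(date +%Y%m%d)
docker build -t registry.example.com/myapp:$TAG .
docker push registry.example.com/myapp:$TAG
# point the Deployment at the new tag, which rolls all of its pods
kubectl set image deployment/myapp myapp=registry.example.com/myapp:$TAG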

As the OP rayhan has found out, and as commented in kubernetes/kubernetes issue 13488, a kubectl patch of an environment variable is enough.
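
A hedged example of that approach, in the same spirit as the patch command at the end of this page (my-deployment and my-container are placeholder names):

kubectl patch deployment my-deployment -p \
  "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"my-container\",\"env\":[{\"name\":\"LAST_MANUAL_RESTART\",\"value\":\"$(date +%s)\"}]}]}}}}"

Because the pod template changes, the Deployment creates a new ReplicaSet and rolls the pods.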

But... K8s 1.15 will bring kubectl rollout restart... that is, once PR 77423 is accepted and merged.

Update: kubectl rollout restart now also works for daemonsets and statefulsets.
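
Usage is a one-liner (the deployment name below is a placeholder):

kubectl rollout restart deployment/my-deployment
kubectl rollout status deployment/my-deployment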

You should never recycle pods manually; that is a clear anti-pattern when using Kubernetes.

Options:

  • Use the declarative format with kubectl apply -f --prune (see the sketch after this list)

  • Use a CI/CD tool like GitLab or Spinnaker

  • Use Ksonnet

  • Use Knative

  • Write your own CI/CD tool that automates it
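
For the first option, a minimal sketch, assuming your manifests live in a k8s/ directory and everything carries an app=myapp label (both are placeholder names for illustration):

kubectl apply -f k8s/ --prune -l app=myapp

This keeps the cluster in sync with your manifests (and deletes objects that were removed from them); pods only roll when something in the pod template actually changes.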

If you do need to restart Pods manually, you could run

kubectl get pods | grep somename | awk '{print $1}' | xargs -i sh -c 'kubectl delete pod -o name {} && sleep 4'

on a timer-based job (e.g. from your CI system) as suggested by KIVagant in https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-372456851

That GitHub thread shows there is currently no single best approach, and people suggest different things. I mention that one because it is closest to your suggestion and is a simple solution if you do have to do it. What is generally agreed is that you should try to avoid restart jobs and instead use probes to ensure unhealthy pods are automatically restarted.

Periodic upgrades (as opposed to restarts) are perfectly good to do, especially as rolling upgrades. But if you do this then be careful that all the upgrading doesn't mask problems. If you have Pods with memory leaks or that exhaust connection pools when left running for long periods then you want to try to get the unhealthy Pods to report themselves as unhealthy - both because they can be automatically restarted and because it will help you monitor for code problems and address them.
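
A minimal liveness-probe sketch along those lines, assuming the container serves a /healthz endpoint on port 8080 (the path, port, image and names are assumptions; adjust for your app):

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:latest
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
          failureThreshold: 3

If the probe fails failureThreshold times in a row, the kubelet restarts that container, which handles the leaky-Pod case without any external restart job.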

So far I have found that the following one-line command works fine for my purpose. I'm running it from Jenkins after a successful build.

kubectl patch deployment {deployment_name} -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"