Stop all Pods in a StatefulSet before scaling it up or down

Submitted by 老子叫甜甜 on 2021-02-07 07:39:49

Question


My team is currently working on migrating a Discord chat bot to Kubernetes. We plan on using a StatefulSet for the main bot service, as each shard (pod) should only have a single connection to the Gateway. Whenever a shard connects to the Gateway, it reports its shard ID (in our case, the pod's ordinal index) and the total number of shards we are running (the number of replicas in the StatefulSet).
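As an aside, each pod can derive its own shard ID at startup, because StatefulSet pods are named `<statefulset-name>-<ordinal>` and that name is the pod's hostname. A minimal sketch (the StatefulSet name `bot` is a placeholder):

```shell
# Inside a StatefulSet pod, HOSTNAME is e.g. "bot-3"; the ordinal after
# the last dash is the shard ID. Hard-coded here for illustration.
HOSTNAME="bot-3"
SHARD_ID="${HOSTNAME##*-}"   # strip everything up to the last "-"
echo "$SHARD_ID"             # prints 3
```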

Having to tell the gateway the total number of shards means that in order to scale our StatefulSet up or down we'd have to stop all pods in that StatefulSet before starting new ones with the updated value.

How can I achieve that? Preferably through configuration, so I don't have to run a special command each time.


Answer 1:


One way of doing this: first, get the YAML configuration of the StatefulSet by running the command below and save it to a file:

kubectl get statefulset NAME -o yaml > sts.yaml

Then delete the StatefulSet by running:

kubectl delete -f sts.yaml

Finally, recreate the StatefulSet from the same configuration file you saved in the first step (with `spec.replicas` updated to the new shard count):

kubectl apply -f sts.yaml

I hope this answers your query about deleting the StatefulSet and creating a new one.
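The three steps above can be combined into one script. A hedged sketch, assuming the StatefulSet is named `bot`, its pods carry the label `app=bot`, and the new shard count is 8 (all placeholders; requires a live cluster):

```shell
# Save the spec, delete the StatefulSet and its pods, then recreate it.
kubectl get statefulset bot -o yaml > sts.yaml
kubectl delete -f sts.yaml
# Wait until every old pod is actually gone before recreating.
kubectl wait --for=delete pod -l app=bot --timeout=120s
kubectl apply -f sts.yaml
# Set the new total shard count on the recreated StatefulSet.
kubectl scale statefulset bot --replicas=8
```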




Answer 2:


Try the kubectl rollout restart sts <sts name> command. It restarts the pods one by one, in a RollingUpdate manner.

Scale the sts down: kubectl scale --replicas=0 sts <sts name>

Scale the sts up: kubectl scale --replicas=<number of replicas> sts <sts name>
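Put together, a stop-all-then-restart scale might look like the following. A sketch, not a definitive recipe: the name `bot`, the label `app=bot`, and the replica count 8 are assumptions, and the commands need a live cluster:

```shell
# Stop every shard by scaling to zero.
kubectl scale --replicas=0 sts bot
# Wait for all pods to terminate so no shard still holds the old total.
kubectl wait --for=delete pod -l app=bot --timeout=120s
# Start again with the new total shard count.
kubectl scale --replicas=8 sts bot
```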




Answer 3:


Before any kubectl scale, since you need more control over your nodes, you might consider a kubectl drain first.

When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.

By draining the node where your pods are managed, you would stop all pods, with the opportunity to scale the StatefulSet with the new value.


See also "How to Delete Pods from a Kubernetes Node" by Keilan Jackson

Start at least with kubectl cordon <nodename> to mark the node as unschedulable.

If your pods are controlled by a StatefulSet, first make sure that the pod that will be deleted can be safely deleted.
How you do this depends on the pod and your application’s tolerance for one of the stateful pods to become temporarily unavailable.

For example, you might want to demote a MySQL or Redis writer to just a read-only slave, update and release application code to no longer reference the pod in question temporarily, or scale up the ReplicaSet first to handle the extra traffic that may be caused by one pod being unavailable.

Once this is done, delete the pod and wait for its replacement to appear on another node.
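The cordon/drain/uncordon flow described above can be sketched as follows. `<node-name>` is a placeholder for the node hosting your pods, and the commands require a live cluster:

```shell
# Mark the node unschedulable so no new pods land on it.
kubectl cordon <node-name>
# Evict the pods from the node (DaemonSet pods cannot be evicted).
kubectl drain <node-name> --ignore-daemonsets
# ...scale the StatefulSet to the new shard count while pods are stopped...
# Make the node schedulable again.
kubectl uncordon <node-name>
```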



Source: https://stackoverflow.com/questions/62066640/stop-all-pods-in-a-statefulset-before-scaling-it-up-or-down
