I have started pods with the following command:
$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1
Something went wrong, and now I can't delete this pod.
For deployments that involve stateful sets (or services, jobs, etc.) you can use the following command. It terminates everything that runs in the specified <NAMESPACE>:
kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all
And the forceful variant:
kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all --cascade=true --grace-period=0 --force
If your pod has a name like name-xxx-yyy, it could be controlled by a replicasets.apps resource named name-xxx. Delete that ReplicaSet before deleting the pod:
kubectl delete replicasets.apps name-xxx
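To confirm which ReplicaSet owns the pod before deleting anything, you can inspect its owner references (name-xxx-yyy is the placeholder pod name from above):
kubectl get pod name-xxx-yyy -o jsonpath='{.metadata.ownerReferences[0].name}'
This prints the name of the owning controller, e.g. name-xxx.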
When a pod is recreated automatically even after you delete it manually, it was created by a Deployment. Creating a Deployment automatically creates a ReplicaSet and its Pods; the ReplicaSet initially starts as many pods as the replicas count in the Deployment spec requests. Whenever you delete one of those pods manually, the ReplicaSet immediately creates a replacement to maintain that count.
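If the goal is to stop the pods without deleting the Deployment object itself, one option is to scale it down to zero replicas (the deployment name below is a placeholder):
kubectl scale deployment <deployment-name> --replicas=0
Otherwise, delete the Deployment and the ReplicaSet and pods go with it:
kubectl delete deployment <deployment-name>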
Yes, sometimes you need to force-delete pods. But in this case force deletion doesn't help: the owning controller simply recreates the pod.
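For reference, this is the usual force-deletion form; against a controller-managed pod it removes only the current pod, and a replacement appears immediately:
kubectl delete pod <pod-name> --grace-period=0 --force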
If you have a Job that keeps running, you need to find the Job and delete it:
kubectl get job --all-namespaces | grep <name>
and then delete it, including the namespace shown in the first column of that output:
kubectl -n <namespace> delete job <job-name>
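If the Job itself reappears after deletion, check whether a CronJob owns it (the job name here is a placeholder):
kubectl get job <job-name> -o jsonpath='{.metadata.ownerReferences[0].kind}'
If this prints CronJob, delete the CronJob instead.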
The root cause of the question asked was the strategy -> type attribute in the deployment/job/replicaset spec, which defines what should happen when the pod is destroyed (either implicitly or explicitly). In my case, it was Recreate.
As per @nomad's answer, deleting the deployment/job/replicaset is the simple fix; it avoids experimenting with risky flag combinations and messing up the cluster as a novice user.
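To check which strategy a deployment actually uses without opening its manifest (the deployment name is a placeholder):
kubectl get deployment <deployment-name> -o jsonpath='{.spec.strategy.type}'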
Try the following commands to understand the behind-the-scenes actions before jumping into debugging:
kubectl get all -A -o name
kubectl get events -A | grep <pod-name>
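If the event stream is noisy, filtering on the involved object is more precise than grep (the pod name is a placeholder):
kubectl get events -A --field-selector involvedObject.name=<pod-name>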