I was originally trying to run a Job that seemed to get stuck in a CrashLoopBackOff. Here was the Job manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name:
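(For context, a minimal Job manifest looks roughly like the sketch below; the name and image are hypothetical placeholders. Note that with restartPolicy: Never, each failed run leaves an Error pod behind and the Job controller starts a new one, which is how the pod count can balloon into the thousands.)

apiVersion: batch/v1
kind: Job
metadata:
  name: es-setup-index              # hypothetical name
spec:
  backoffLimit: 4                   # stop retrying after 4 failed pods
  template:
    spec:
      restartPolicy: Never          # each failure leaves an Error pod behind
      containers:
      - name: setup
        image: example/es-setup:latest   # hypothetical image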
The solution was, as @johnharris85 mentioned in the comments, to manually delete all the pods. To do that, I ran the following:
kubectl get pods -w | tee all-pods.txt
That dumped all my pods to a file; then I could filter on and delete only the ones I wanted:
kubectl delete pod $(more all-pods.txt | grep es-setup-index | awk '{print $1}')
Note: I had about 9292 pods; it took about 1-2 hours to delete them all.
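A possibly faster alternative, assuming all of those pods came from a single Job: the Job controller automatically labels its pods with job-name, so a label selector can delete them in one shot without the intermediate file (es-setup-index being the Job name from the grep above):

kubectl delete pods -l job-name=es-setup-index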
I usually remove all the Error pods with this command:
kubectl delete pod `kubectl get pods --namespace <yournamespace> | awk '$3 == "Error" {print $1}'` --namespace <yournamespace>
kubectl delete pods --field-selector status.phase=Failed -n <your-namespace>
...cleans up any failed pods in <your-namespace>.
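If you want to sweep failed pods out of every namespace at once, the same field selector should combine with --all-namespaces (spelled -A on newer kubectl versions):

kubectl delete pods --field-selector status.phase=Failed --all-namespaces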
Here's a quick way to fix it :)
kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
Edit: add the -a flag to kubectl get pods if you are using an old version of k8s.
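In other words, on an old kubectl the pipeline would look something like this; -a was shorthand for --show-all, which included terminated (Succeeded/Failed) pods that old versions hid by default. Newer versions show them without any flag and have removed -a:

kubectl get pods -a | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod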