I was originally trying to run a Job that seemed to be stuck in a CrashLoopBackOff. Here was the Job manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: es-setup-index   # name inferred from the pod names filtered below; rest of the manifest omitted
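For context on why the pods pile up: when a Job's pod template uses restartPolicy: Never, every failed attempt produces a fresh pod, so a job that keeps crashing can leave thousands of pods behind. A minimal sketch of the fields that cap this (the Job name carries over from above; the image is a placeholder):

apiVersion: batch/v1
kind: Job
metadata:
  name: es-setup-index
spec:
  backoffLimit: 4              # give up after 4 failed pods instead of retrying further
  template:
    spec:
      restartPolicy: Never     # each failure creates a new pod, hence the cap above
      containers:
      - name: setup
        image: example/es-setup-index:latest   # placeholder image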
The solution, as @johnharris85 mentioned in a comment, was to manually delete all the pods. To do that I ran the following:
kubectl get pods | tee all-pods.txt
That dumped a list of all my pods into all-pods.txt. Then I filtered it and deleted only the pods I wanted:
kubectl delete pod $(grep es-setup-index all-pods.txt | awk '{print $1}')
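If all of these pods came from a single Job, a label selector can skip the intermediate file entirely: the Job controller labels each pod it creates with job-name. This sketch assumes the Job is named es-setup-index, matching the grep pattern above:

kubectl delete pods -l job-name=es-setup-index

Deleting the Job object itself also garbage-collects its pods:

kubectl delete job es-setup-index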
Note: I had roughly 9,292 pods, and it took about 1-2 hours to delete them all.
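If you ever need to delete at that scale again, one option worth knowing: kubectl delete accepts --wait=false, so it returns as soon as the delete requests are accepted instead of blocking on each pod's termination. Piping through xargs also avoids hitting the shell's argument-list limit with thousands of pod names:

grep es-setup-index all-pods.txt | awk '{print $1}' | xargs kubectl delete pods --wait=false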