Kubernetes has a ton of pods in error state that can't seem to be cleared

执笔经年 2021-02-13 20:01

I was originally trying to run a Job that seemed to get stuck in a CrashLoopBackOff. Here was the Job manifest:

apiVersion: batch/v1
kind: Job
metadata:
  name:         
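
The manifest above is cut off after `name:`. For context, a minimal `batch/v1` Job sketch looks like the following; the name is borrowed from the pod prefix mentioned in the answer below, and the image and command are hypothetical placeholders, not recovered from the question:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: es-setup-index      # hypothetical, inferred from the pod names in the answer
spec:
  backoffLimit: 4           # cap retries; relevant because with
  template:                 # restartPolicy: Never each failed attempt
    spec:                   # leaves behind a new pod in Error state
      restartPolicy: Never
      containers:
      - name: es-setup-index
        image: busybox:1.36   # placeholder image
        command: ["sh", "-c", "echo indexing"]
```

This failure mode is worth noting: a Job whose pod template has `restartPolicy: Never` does not restart containers in place; the Job controller creates a fresh pod for each retry, and the failed pods stick around in `Error` state, which is how a cluster can accumulate thousands of dead pods.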


        
4 Answers
  •  北荒 (OP)
    2021-02-13 20:06

    The solution was as @johnharris85 mentioned in the comment. I had to manually delete all the pods. To do that I ran the following:

    kubectl get pods -w | tee all-pods.txt
    

    That dumped all my pods to a file; then I filtered for only the ones I wanted and deleted them:

    kubectl delete pod $(more all-pods.txt | grep es-setup-index | awk '{print $1}')
    

    Note: I had about 9292 pods, it took about 1-2 hours to delete them all.
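
    The grep/awk extraction used above can be tried without a cluster. Below is a hypothetical sample of `kubectl get pods` output (the pod names are made up, not the OP's), fed through the same pipeline to show which names would be passed to `kubectl delete pod`:

```shell
# Hypothetical sample of `kubectl get pods` output (invented pod names),
# written to the same file name the answer uses.
cat > all-pods.txt <<'EOF'
NAME                     READY   STATUS    RESTARTS   AGE
es-setup-index-abc12     0/1     Error     0          5h
es-setup-index-def34     0/1     Error     0          5h
web-frontend-xyz99       1/1     Running   0          2d
EOF

# Same extraction as in the answer: keep only matching lines and print
# the first whitespace-separated field (the pod name) of each.
grep es-setup-index all-pods.txt | awk '{print $1}'
```

    As a usage note: on reasonably recent clusters, failed pods can also be removed in bulk with a field selector, e.g. `kubectl delete pods --field-selector=status.phase=Failed`, which avoids the intermediate file entirely.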
