Kubernetes pod gets recreated when deleted

清酒与你 2020-12-12 10:25

I have started pods with the command

$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1

Something went wrong, and now the pod gets recreated whenever I delete it.

17 Answers
  • 2020-12-12 10:39

    You can run kubectl get replicasets and check for the old ReplicaSet based on its age or creation time.

    Delete the old ReplicaSet if you want to remove the currently running pods of the application:

    kubectl delete replicasets <Name of replicaset>
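
    For reference, one way to list ReplicaSets sorted by creation time (oldest first) is kubectl's --sort-by flag; this should work on any reasonably recent kubectl:

    kubectl get replicasets --sort-by=.metadata.creationTimestamp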
    
  • 2020-12-12 10:39

    I also faced this issue. I used the command below to delete the deployment:

    kubectl delete deployments DEPLOYMENT_NAME

    but the pods were still being recreated, so I cross-checked the ReplicaSets with:

    kubectl get rs

    and then edited the ReplicaSet's replica count from 1 to 0:

    kubectl edit rs REPLICASET_NAME
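
    If you prefer not to edit the ReplicaSet interactively, kubectl scale can set the replica count to 0 directly (REPLICASET_NAME is a placeholder):

    kubectl scale rs REPLICASET_NAME --replicas=0

    Note that if the ReplicaSet is owned by a Deployment, the Deployment controller will scale it back up, so deleting the Deployment itself is usually the cleaner fix.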
    
  • 2020-12-12 10:43

    In some cases the pods still will not go away even after deleting the deployment. In that case, you can force-delete them with the command below:

    kubectl delete pods podname --grace-period=0 --force
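
    Note that force deletion only removes the Pod object itself; if a Deployment or ReplicaSet still owns the pod, a replacement will be created, so you would typically delete the owning controller first, for example:

    kubectl delete deployment DEPLOYMENT_NAME
    kubectl delete pods podname --grace-period=0 --force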

  • 2020-12-12 10:43

    After taking an interactive tutorial I ended up with a bunch of pods, services, and deployments:

    me@pooh ~ > kubectl get pods,services
    NAME                                       READY   STATUS    RESTARTS   AGE
    pod/kubernetes-bootcamp-5c69669756-lzft5   1/1     Running   0          43s
    pod/kubernetes-bootcamp-5c69669756-n947m   1/1     Running   0          43s
    pod/kubernetes-bootcamp-5c69669756-s2jhl   1/1     Running   0          43s
    pod/kubernetes-bootcamp-5c69669756-v8vd4   1/1     Running   0          43s
    
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   37s
    me@pooh ~ > kubectl get deployments --all-namespaces
    NAMESPACE     NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    default       kubernetes-bootcamp   4         4         4            4           1h
    docker        compose               1         1         1            1           1d
    docker        compose-api           1         1         1            1           1d
    kube-system   kube-dns              1         1         1            1           1d
    

    To clean up everything, deleting with --all worked fine:

    me@pooh ~ > kubectl delete pods,services,deployments --all
    pod "kubernetes-bootcamp-5c69669756-lzft5" deleted
    pod "kubernetes-bootcamp-5c69669756-n947m" deleted
    pod "kubernetes-bootcamp-5c69669756-s2jhl" deleted
    pod "kubernetes-bootcamp-5c69669756-v8vd4" deleted
    service "kubernetes" deleted
    deployment.extensions "kubernetes-bootcamp" deleted
    

    That left me with (what I think is) an empty Kubernetes cluster:

    me@pooh ~ > kubectl get pods,services,deployments
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8m
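
    For an even shorter cleanup of the current namespace, the all resource category covers pods, services, deployments, replicasets, and a few other common workload types; to the best of my knowledge this is equivalent for the objects shown above:

    kubectl delete all --all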
    
  • 2020-12-12 10:43

    In my case I deployed via a YAML file with kubectl apply -f deployment.yaml, and the solution was to delete it with kubectl delete -f deployment.yaml
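
    If you are not sure which objects that manifest created, kubectl can also list (and then delete) them straight from the same file:

    kubectl get -f deployment.yaml
    kubectl delete -f deployment.yaml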

  • 2020-12-12 10:44

    You need to delete the deployment, which will in turn delete the pods and the replica sets: https://github.com/kubernetes/kubernetes/issues/24137

    To list all deployments:

    kubectl get deployments --all-namespaces
    

    Then to delete the deployment:

    kubectl delete -n NAMESPACE deployment DEPLOYMENT
    

    Where NAMESPACE is the namespace it's in, and DEPLOYMENT is the name of the deployment.

    In some cases the pod could also be kept running by a Job or DaemonSet. Check the following and run the appropriate delete command:

    kubectl get jobs
    
    kubectl get daemonsets.apps --all-namespaces
    
    kubectl get daemonsets.extensions --all-namespaces
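
    More generally, you can check which controller keeps recreating a pod by inspecting its ownerReferences (PODNAME here is a placeholder):

    kubectl get pod PODNAME -o jsonpath='{.metadata.ownerReferences[*].kind}'

    The output tells you whether the owner is a ReplicaSet, Job, DaemonSet, etc., and therefore which object you need to delete.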
    