Kubernetes pod gets recreated when deleted

清酒与你 2020-12-12 10:25

I have started pods with this command:

$ kubectl run busybox --image=busybox --restart=Never --tty -i --generator=run-pod/v1

Something went wrong: whenever I delete the pod, it gets recreated.

17 answers
  • 2020-12-12 10:44

    Instead of removing the namespace, you can try removing the ReplicaSet. First, list the ReplicaSets:

    kubectl get rs --all-namespaces
    

    Then delete the ReplicaSet:

    kubectl delete rs your_app_name
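
    If you are not sure which ReplicaSet owns the pod, one way to check (the pod name below is only a placeholder) is to read the pod's ownerReferences:

    kubectl get pod <your_pod_name> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'

    The ReplicaSet it prints is the one to pass to kubectl delete rs.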
    
  • 2020-12-12 10:47

    Instead of trying to figure out whether it is a deployment, daemonset, statefulset, or something else (in my case it was a replication controller that kept spawning new pods :), I determined what kept spinning up the image by getting all the resources with this command:

    kubectl get all

    Of course you could also get all resources from all namespaces:

    kubectl get all --all-namespaces

    or define the namespace you would like to inspect:

    kubectl get all -n NAMESPACE_NAME

    Once I saw that the replication controller was responsible for my trouble, I deleted it:

    kubectl delete replicationcontroller/CONTROLLER_NAME
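
    As a shortcut, if you already know the pod's name, kubectl describe prints a Controlled By: line that names the owning controller directly (the pod name here is just a placeholder):

    kubectl describe pod POD_NAME | grep "Controlled By"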

  • 2020-12-12 10:48

    This will provide information about all the pods, deployments, services, and jobs in the namespace:

    kubectl get pods,services,deployments,jobs
    

    Pods can be created either by deployments or by jobs:

    kubectl delete job [job_name]
    kubectl delete deployment [deployment_name]
    

    If you delete the deployment or the job, the pods will stop being recreated.
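
    The reason this works is that the delete cascades by default, so the pods (and, for a deployment, its ReplicaSet) are removed together with their owner. If you ever want the opposite, removing the owner while keeping the pods, kubectl has an orphaning mode; note that the exact flag value depends on the client version:

    # kubectl v1.20+ syntax; older clients use --cascade=false instead
    kubectl delete deployment [deployment_name] --cascade=orphan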

  • 2020-12-12 10:49

    Look out for StatefulSets as well:

    kubectl get sts --all-namespaces
    

    To delete all the StatefulSets in a namespace:

    kubectl --namespace <yournamespace> delete sts --all
    

    To delete them one by one:

    kubectl --namespace ag1 delete sts mssql1 
    kubectl --namespace ag1 delete sts mssql2
    kubectl --namespace ag1 delete sts mssql3
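
    One thing to keep in mind: deleting a StatefulSet removes its pods, but the PersistentVolumeClaims it created are normally left behind. If you also want the storage gone, delete the PVCs yourself (namespace ag1 reused from the example above, the PVC name is a placeholder):

    kubectl --namespace ag1 get pvc
    kubectl --namespace ag1 delete pvc <pvc_name>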
    
  • 2020-12-12 10:51

    Many answers here tell you to delete a specific k8s object, but you can delete multiple objects at once instead of one by one:

    kubectl delete deployments,jobs,services,pods --all -n <namespace>

    In my case, I'm running an OpenShift cluster with OLM, the Operator Lifecycle Manager. OLM is what controls the deployment, so when I deleted the deployment it was not sufficient to stop the pods from restarting.

    Only after I deleted OLM and its subscription were the deployment, services, and pods gone.

    First list all k8s objects in your namespace:

    $ kubectl get all -n openshift-submariner
    
    NAME                                       READY   STATUS    RESTARTS   AGE
    pod/submariner-operator-847f545595-jwv27   1/1     Running   0          8d

    NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/submariner-operator-metrics   ClusterIP   101.34.190.249   <none>        8383/TCP   8d

    NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/submariner-operator   1/1     1            1           8d

    NAME                                             DESIRED   CURRENT   READY   AGE
    replicaset.apps/submariner-operator-847f545595   1         1         1       8d
    

    OLM is not listed with get all, so I searched for it specifically:

    $ kubectl get olm -n openshift-submariner
    
    NAME                                                      AGE
    operatorgroup.operators.coreos.com/openshift-submariner   8d

    NAME                                                             DISPLAY      VERSION
    clusterserviceversion.operators.coreos.com/submariner-operator   Submariner   0.0.1
    

    Now delete all the objects, including the OLM objects, subscriptions, deployments, replica sets, etc.:

    $ kubectl delete olm,svc,rs,rc,subs,deploy,jobs,pods --all -n openshift-submariner
    
    operatorgroup.operators.coreos.com "openshift-submariner" deleted
    clusterserviceversion.operators.coreos.com "submariner-operator" deleted
    deployment.extensions "submariner-operator" deleted
    subscription.operators.coreos.com "submariner" deleted
    service "submariner-operator-metrics" deleted
    replicaset.extensions "submariner-operator-847f545595" deleted
    pod "submariner-operator-847f545595-jwv27" deleted
    

    List objects again - all gone:

    $ kubectl get all -n openshift-submariner
    No resources found.
    
    $ kubectl get olm -n openshift-submariner
    No resources found.
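
    If deleting everything in the namespace is more than you need, the usual way to uninstall an OLM-managed operator is to remove just its Subscription and ClusterServiceVersion (the names below are taken from the listings above):

    kubectl delete subscription submariner -n openshift-submariner
    kubectl delete clusterserviceversion submariner-operator -n openshift-submariner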
    
  • 2020-12-12 10:56

    I experienced a similar problem: after deleting the deployment (kubectl delete deploy <name>), the pods kept "Running" and were automatically re-created after deletion (kubectl delete po <name>).

    It turned out that the associated replica set was not deleted automatically for some reason, and after deleting that (kubectl delete rs <name>), it was possible to delete the pods.
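
    If you run into the same situation, a quick way to confirm and fix it (resource names are placeholders) is:

    kubectl get rs                  # the deployment is gone, but a stale ReplicaSet remains
    kubectl delete rs <rs_name>     # remove the stale ReplicaSet
    kubectl get po                  # the pods should now terminate and stay gone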
