kubernetes-pod

How to get list of pods which are “ready”?

我只是一个虾纸丫 submitted on 2020-08-19 16:55:44
Question: I am using kubectl to retrieve a list of pods:

    kubectl get pods --selector=artifact=boot-example -n my-sandbox

The results I am getting are:

    NAME                            READY   STATUS    RESTARTS   AGE
    boot-example-757c4c6d9c-kk7mg   0/1     Running   0          77m
    boot-example-7dd6cd8d49-d46xs   1/1     Running   0          84m
    boot-example-7dd6cd8d49-sktf8   1/1     Running   0          88m

I would like to get only those pods which are "ready" (i.e. have passed their readinessProbe). Is there any kubectl command which returns only "ready" pods? If not kubectl
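There is no built-in `kubectl` flag that filters on readiness, but each pod's Ready condition is visible in the output of `kubectl get pods -o json` under `.status.conditions`; a common approach pipes that JSON through `jq` or a go-template. The selection logic can be sketched in Python on a mocked pod list (the pod names mirror the question; the JSON shape follows `kubectl get pods -o json`):

```python
import json

# Mocked output of:
#   kubectl get pods --selector=artifact=boot-example -n my-sandbox -o json
pod_list = {
    "items": [
        {"metadata": {"name": "boot-example-757c4c6d9c-kk7mg"},
         "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
        {"metadata": {"name": "boot-example-7dd6cd8d49-d46xs"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"metadata": {"name": "boot-example-7dd6cd8d49-sktf8"},
         "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    ]
}

def ready_pods(pods):
    """Return names of pods whose Ready condition is True (readinessProbe passing)."""
    names = []
    for pod in pods["items"]:
        for cond in pod.get("status", {}).get("conditions", []):
            if cond["type"] == "Ready" and cond["status"] == "True":
                names.append(pod["metadata"]["name"])
    return names

print(ready_pods(pod_list))
```

The same filter can be expressed as a one-liner with `jq`, e.g. `kubectl get pods ... -o json | jq -r '.items[] | select(.status.conditions[]? | select(.type=="Ready" and .status=="True")) | .metadata.name'`.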

Why does my GKE cluster not show any events?

 ̄綄美尐妖づ submitted on 2020-08-09 08:13:59
Question: I have a GKE cluster and deployed some workloads on it. What I noticed is that whenever I run kubectl get events --all-namespaces, I don't see any results. kubectl describe deployment <name> doesn't show any events either. I'm pretty sure things do happen in my cluster, because all my workloads are running fine, Stackdriver is able to report logs, and HPA functions perfectly. But my events section is empty all over. Why is this? Is this something I have to enable manually in GKE? Answer 1:
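Empty output from `kubectl get events` is often just TTL expiry: the API server garbage-collects events after its `--event-ttl` period, which defaults to one hour, so a cluster where nothing has changed recently legitimately lists nothing. A minimal sketch of that age check (the one-hour value is the upstream default and an assumption about this cluster's apiserver flags):

```python
from datetime import datetime, timedelta, timezone

EVENT_TTL = timedelta(hours=1)  # kube-apiserver --event-ttl default

def still_visible(last_timestamp, now):
    """An event is still listable only while it is younger than the TTL."""
    return now - last_timestamp < EVENT_TTL

now = datetime(2020, 8, 9, 8, 0, tzinfo=timezone.utc)
recent = datetime(2020, 8, 9, 7, 30, tzinfo=timezone.utc)  # 30 min old
stale = datetime(2020, 8, 9, 6, 0, tzinfo=timezone.utc)    # 2 h old

print(still_visible(recent, now))  # young enough to appear
print(still_visible(stale, now))   # already pruned
```

A quick way to confirm events are being recorded at all is to trigger a change (for example, scaling a deployment) and immediately re-run `kubectl get events`.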

Unable to delete all pods in Kubernetes - Clear/restart Kubernetes

試著忘記壹切 submitted on 2020-07-22 03:18:20
Question: I am trying to delete/remove all the pods running in my environment. When I issue docker ps I get the below output. This is a sample screenshot. As you can see, they are all K8s. I would like to delete/remove all of the pods. I tried all the approaches below, but the pods keep appearing again and again:

    sudo kubectl delete --all pods --namespace=default/kube-public
    # returns "no resources found" for both the default and kube-public namespaces
    sudo kubectl delete --all pods --namespace=kube
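Pods that "keep appearing again and again" are almost always recreated by a controller (a Deployment/ReplicaSet or DaemonSet, or, for kube-system containers like these, static pods run directly by the kubelet), so deleting the pod alone is not enough; the owning object has to be deleted instead (e.g. `kubectl delete deployment <name>`). The ownership is recorded in each pod's `metadata.ownerReferences`; a sketch of that check on mocked pod data (the pod and ReplicaSet names here are hypothetical):

```python
def owners(pod):
    """Return (kind, name) pairs of the controllers that will recreate this pod."""
    return [(ref["kind"], ref["name"])
            for ref in pod.get("metadata", {}).get("ownerReferences", [])]

managed_pod = {
    "metadata": {
        "name": "web-5d9c7c9b7d-abcde",  # hypothetical controller-managed pod
        "ownerReferences": [{"kind": "ReplicaSet", "name": "web-5d9c7c9b7d"}],
    }
}
bare_pod = {"metadata": {"name": "one-off-debug-pod"}}  # hypothetical bare pod

print(owners(managed_pod))  # has an owner: the ReplicaSet brings it back
print(owners(bare_pod))     # no owner: kubectl delete pod removes it for good
```

To tear down a node's Kubernetes state entirely rather than chase individual pods, `kubeadm reset` is the usual route on kubeadm-based clusters; whether that applies here depends on how this cluster was set up.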