kubernetes-pod

Unable to delete all pods in Kubernetes - Clear/restart Kubernetes

Submitted by 自闭症网瘾萝莉.ら on 2020-07-22 03:13:45
Question: I am trying to delete/remove all the pods running in my environment. When I issue docker ps I get the output below (this is a sample screenshot); as you can see, they are all K8s containers. I would like to delete/remove all of these pods, but no matter which of the approaches below I try, they keep reappearing. sudo kubectl delete --all pods --namespace=default/kube-public #returns "no resources found" for both default and kube-public namespaces sudo kubectl delete --all pods --namespace=kube
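
The pods listed by docker ps on such a node are usually control-plane and kube-system components, which the kubelet or their controllers recreate as soon as they are deleted, so deleting the pods alone will not clear them. A minimal sketch of how to approach this, assuming a kubeadm-provisioned node (the deployment and namespace names are placeholders):

    # See which namespaces the pods actually live in
    kubectl get pods --all-namespaces

    # Deleting a pod owned by a Deployment/DaemonSet only makes its controller recreate it,
    # so delete the owning workload instead
    kubectl delete deployment <deployment-name> -n <namespace>

    # On a kubeadm-provisioned node this tears down the whole cluster state,
    # including the static control-plane pods that keep reappearing
    sudo kubeadm reset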

Are the containers in a Kubernetes pod part of the same cgroup?

Submitted by a 夏天 on 2020-07-18 11:50:09
Question: In a multi-container Kubernetes pod, are the containers part of the same cgroup (along with the pod), or is a separate cgroup created for each container? Answer 1: Cgroups: containers in a pod share part of the cgroup hierarchy, but each container gets its own cgroup. We can try this out and verify it ourselves. Start a multi-container pod: # cat mc2.yaml apiVersion: v1 kind: Pod metadata: name: two-containers spec: restartPolicy: Never containers: - name: container1 image: ubuntu command: [ "/bin/bash", "-c", "
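
One quick way to confirm the per-container cgroups is to read /proc/1/cgroup inside each container of the pod; the paths share a pod-level prefix but end in different container-specific leaves. A minimal sketch, reusing the two-containers pod and container1 name from the excerpt above (the second container name is an assumption):

    # Inspect the cgroup of PID 1 in each container of the pod
    kubectl exec two-containers -c container1 -- cat /proc/1/cgroup
    kubectl exec two-containers -c container2 -- cat /proc/1/cgroup

    # Both outputs typically share a ".../pod<uid>/" prefix but end in distinct
    # container-specific cgroups, one per container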

Kubernetes mount volume on existing directory with files inside the container

Submitted by 倾然丶 夕夏残阳落幕 on 2020-07-02 12:10:37
Question: I am using k8s version 1.11 and CephFS as storage. I am trying to mount a directory created on the CephFS in the pod. To achieve this, I have written the following volume and volume-mount config in the deployment configuration: Volume { "name": "cephfs-0", "cephfs": { "monitors": [ "10.0.1.165:6789", "10.0.1.103:6789", "10.0.1.222:6789" ], "user": "cfs", "secretRef": { "name": "ceph-secret" }, "readOnly": false, "path": "/cfs/data/conf" } } volumeMounts { "mountPath": "/opt
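
The usual symptom behind this kind of question is that mounting a volume onto a directory that already exists in the image shadows the files baked into it. A minimal sketch for diagnosing it, with the pod name and mount path as placeholders (volumeMounts[].subPath is the field commonly used to mount only part of a volume):

    # Check what actually ended up at the mount point inside the pod
    kubectl exec <pod-name> -- ls -la <mount-path>
    kubectl exec <pod-name> -- mount | grep ceph

    # If the image's own files must stay visible, mounting a sub-directory of the
    # volume via volumeMounts[].subPath onto a narrower path is the common workaround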

Cannot access embedded ActiveMQ within Kubernetes cluster

Submitted by ℡╲_俬逩灬. on 2020-06-29 03:53:07
Question: We are starting an embedded ActiveMQ server in our Java application, which runs in a Kubernetes pod. broker = BrokerFactory.createBroker("broker:(tcp://localhost:41415)?persistent=false"); broker.setBrokerId("ActiveMqBroker" + 1); broker.setUseJmx(false); broker.start(); Now we have one application that accesses it inside the same pod, and this works fine. However, when another application accesses this ActiveMQ server from another pod using a service name like tcp://service.hostname:41415, then
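
A broker bound to tcp://localhost:41415 is only reachable from inside the same pod's network namespace, which matches the behaviour described. A minimal sketch of exposing and testing the port, assuming the broker is rebound to 0.0.0.0 in the BrokerFactory URI (the deployment and service names are placeholders):

    # Expose the broker port through a ClusterIP Service
    kubectl expose deployment <activemq-deployment> --port=41415 --target-port=41415 --name=activemq

    # Verify reachability from a throwaway pod in the cluster
    kubectl run amq-test --rm -it --image=busybox --restart=Never -- nc -zv activemq 41415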

Log rotation on logs consuming disk space in Google Cloud Kubernetes pod

Submitted by 别说谁变了你拦得住时间么 on 2020-06-25 18:08:57
Question: We have a pod in a Google Cloud Platform Kubernetes cluster writing JSON-formatted logs to stdout, which Stackdriver picks up out of the box. However, we see the disk usage of the pod growing and growing, and we can't figure out how to set a maximum size on the Deployment for log rotation. The documentation on Google Cloud and Kubernetes is unclear on this. This is just the last hour: Answer 1: Are you sure that the disk usage of the pod is high because of the logs? If the application writes logs to stdout,
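
Before tuning log rotation it is worth confirming what is actually consuming the space, since stdout/stderr logs live on the node (under /var/log/containers) and are rotated there, not inside the pod. A minimal sketch, with the pod name and paths as placeholders:

    # Check what is consuming space inside the pod's filesystem
    kubectl exec <pod-name> -- df -h
    kubectl exec <pod-name> -- du -sh /tmp /var/log

    # Growing in-pod usage usually points to files the application itself writes
    # to the container filesystem rather than to its stdout logs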

Triggering auto-expansion of OpenShift persistent volumes

Submitted by 扶醉桌前 on 2020-06-16 17:32:34
Question: I have deployed a MySQL StatefulSet with one master and two slave pods. Each pod has its own PersistentVolumeClaim (PVC) with the storage requested by the user. I am able to expand any persistent volume by editing its respective PVC. I am trying to implement a service that triggers auto-expansion of the respective volume as soon as the consumed storage of any pod crosses 90% (preferably in Java). From my investigation, I can use a patch request to edit any PVC's JSON for the desired storage.
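
The expansion itself can be expressed as a patch of the PVC's requested storage; a Java client would send the same PATCH through the Kubernetes API. A minimal sketch using kubectl, with the PVC name, size, and node name as placeholders (the StorageClass must have allowVolumeExpansion enabled):

    # Grow a PVC by patching its requested storage
    kubectl patch pvc <pvc-name> --type merge -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

    # Per-volume usage for the 90% threshold can be read from the kubelet stats summary
    kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary"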

How to get git commit sha1 using kubectl cmd?

Submitted by 江枫思渺然 on 2020-06-16 03:23:28
Question: How can I use a kubectl command to get a specific pod's commit SHA-1, like: kubectl get git_commit_sha1 [pod_name] Answer 1: There is no way to achieve what you want at the moment using kubectl. The only possible way would be if your Docker image had the git command built in; in that case you could use kubectl exec to get the information you want. Example: $ kubectl exec -ti podname -- git show Alternatively, if you really think your idea makes sense and may be useful to more people, you can open a feature
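
Besides kubectl exec, a common pattern is to record the commit at deploy time as a pod label or annotation and read it back later; this is a convention, not a built-in kubectl feature. A minimal sketch with placeholder names:

    # Record the commit on the pod at deploy time (annotation key is illustrative)
    kubectl annotate pod <pod-name> git-commit="$(git rev-parse HEAD)"

    # Read it back later
    kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations.git-commit}'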

Kubernetes persistent volume and persistent volume claim exceeded storage

Submitted by 放肆的年华 on 2020-05-30 06:42:49
Question: Following the Kubernetes guide, I have created a PV, a PVC, and a pod. I have claimed only 10Mi out of the 20Mi PV, and I have copied 23Mi, which is more than my PV, but my pod is still running. Can anyone explain? pv-volume.yaml kind: PersistentVolume apiVersion: v1 metadata: name: task-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 20Mi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" pv-claim.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: task
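
The likely explanation is that hostPath-backed volumes do not enforce the capacity declared on the PV or requested by the PVC; those numbers are only used to match claims to volumes, so writes succeed as long as the node's disk has room. A quick way to check, with the pod name and mount path as placeholders:

    # Compare what the pod actually sees at the mount point with the PV/PVC numbers
    kubectl exec <pod-name> -- df -h <mount-path>
    kubectl exec <pod-name> -- du -sh <mount-path>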