kubernetes-pod

kubectl list / delete all completed jobs

Submitted by 依然范特西╮ on 2019-12-11 02:56:37
Question: I'm looking for a kubectl command to list / delete all completed jobs. I've tried:

    kubectl get job --field-selector status.succeeded=1

But I get:

    field selector "status.succeeded=1": field label "status.succeeded" not supported for batchv1.Job

What are the possible fields for --field-selector when getting jobs? Is there a better way to do this?

Answer 1: What you can do to list all the succeeded jobs is first get all the jobs and then filter the output: kubectl get job --all
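A sketch of the filter-the-output idea from the answer, using kubectl's jsonpath support (one possible approach, not the answer's exact command; it assumes the current namespace and a kubectl version whose jsonpath implementation supports filter expressions):

    # list jobs whose .status.succeeded is 1
    kubectl get jobs -o jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}'

    # delete them
    kubectl delete jobs $(kubectl get jobs -o jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')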

What does the container_cpu_cfs_throttled_seconds_total metric mean?

Submitted by 萝らか妹 on 2019-12-10 15:28:20
Question: cAdvisor has two metrics, container_cpu_cfs_throttled_seconds_total and container_cpu_cfs_throttled_periods_total. I'm confused about what they mean. The explanation I found is: when a container runs with a CPU limit and its CPU usage goes over the limit, the container is "throttled" and time is added to container_cpu_cfs_throttled_seconds_total. That would mean: (1) rate(container_cpu_cfs_throttled_seconds_total) > 0 only when the container's CPU usage is over the limit, and (2) we can use this metric to alert when a container's CPU is over
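A common way to turn these counters into an alertable signal is the fraction of CFS scheduling periods in which the container was throttled; a PromQL sketch (the 5m window and 25% threshold are arbitrary choices, not from the question):

    # fraction of CFS scheduling periods in which the container was throttled
    rate(container_cpu_cfs_throttled_periods_total[5m])
      / rate(container_cpu_cfs_periods_total[5m]) > 0.25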

Kubernetes pod distribution amongst nodes with preferred mode

Submitted by ▼魔方 西西 on 2019-12-10 11:09:08
Question: I am working on migrating my applications to Kubernetes. I am using EKS. I want to distribute my pods across different nodes to avoid having a single point of failure. I read about pod affinity and anti-affinity and the required and preferred modes. This answer gives a very nice way to accomplish this. But my doubt is: let's say I have 3 nodes, of which 2 are already full (resource-wise). If I use requiredDuringSchedulingIgnoredDuringExecution, k8s will spin up new nodes and will distribute the
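For reference, a minimal sketch of the preferred-mode pod anti-affinity the question is comparing against the required mode (the app: myapp label is a placeholder for this deployment's pod label):

    # goes under the Deployment's spec.template.spec
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: myapp                        # placeholder: this deployment's pod label
            topologyKey: kubernetes.io/hostname   # prefer spreading across nodes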

How to route to a specific pod through a Kubernetes Service (like a Gateway API)

Submitted by 限于喜欢 on 2019-12-09 00:57:21
Question: I am running Kubernetes on "Docker Desktop" on Windows. I have a LoadBalancer Service for a deployment which has 3 replicas. I would like to access a SPECIFIC pod through some means (such as via a URL path: <serviceIP>:8090/pod1). Is there any way to achieve this use case?

deployment.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service1
      labels:
        app: stream
    spec:
      ports:
      - port: 8090
        targetPort: 8090
        name: port8090
      selector:
        app: stream
      # clusterIP: None
      type: LoadBalancer
    ---
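One common way to address an individual replica is a headless Service in front of a StatefulSet, which gives every pod a stable DNS name; a sketch reusing the app: stream selector (the StatefulSet name "stream" and the resulting pod DNS name are illustrative assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service1
    spec:
      clusterIP: None            # headless: per-pod DNS records instead of a single VIP
      ports:
      - port: 8090
        targetPort: 8090
      selector:
        app: stream
    # with a StatefulSet named "stream", each replica becomes reachable as e.g.
    #   stream-0.my-service1.default.svc.cluster.local:8090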

Relation between preStop hook and terminationGracePeriodSeconds

Submitted by 亡梦爱人 on 2019-12-07 18:55:17
Question: Basically what I am trying to do is play around with the pod lifecycle and check whether we can do some cleanup/backup, such as copying logs, before the pod terminates. What I need: copy logs/heap dumps from the container to a hostPath/S3 before termination. What I tried: I used a preStop hook with a bash command to echo a message (just to see if it works!). I used terminationGracePeriodSeconds together with a delay in preStop and toggled them to see if the process works. E.g. keep terminationGracePeriodSeconds: 30 sec (default
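A minimal sketch of the kind of preStop hook and grace period being experimented with here (the image, log paths, and hostPath are placeholders, not taken from the question):

    spec:
      terminationGracePeriodSeconds: 60      # must be long enough for the preStop hook to finish
      containers:
      - name: app
        image: myapp:latest                  # placeholder image
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "cp /var/log/app/*.log /backup/ || true"]
        volumeMounts:
        - name: backup
          mountPath: /backup
      volumes:
      - name: backup
        hostPath:
          path: /var/backup                  # placeholder hostPath on the node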

Watch Kubernetes pod status for completion in client-go

Submitted by 醉酒当歌 on 2019-12-07 10:41:48
Question: I am creating a pod with the k8s client-go and setting up a watch to get notified when the pod has completed, so that I can read the pod's logs. The watch interface doesn't seem to provide any events on the channel. Here is the code; how would I get notified that the pod status is now completed and the logs are ready to be read?

    func readLogs(clientset *kubernetes.Clientset) {
        // namespace := "default"
        // label := "cithu"
        var (
            pod       *v1.Pod
            watchface watch.Interface
            err       error
        )
        // returns a pod
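A sketch of one way to wait for completion with a watch, assuming a recent client-go where Watch takes a context and assuming the pod's name and namespace are already known (createPod itself is left out):

    package podwatch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodCompletion watches a single pod until it reaches Succeeded or Failed.
    func waitForPodCompletion(clientset *kubernetes.Clientset, namespace, name string) error {
        w, err := clientset.CoreV1().Pods(namespace).Watch(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=" + name,
        })
        if err != nil {
            return err
        }
        defer w.Stop()

        for event := range w.ResultChan() {
            pod, ok := event.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                fmt.Println("pod completed:", pod.Name)
                return nil
            case corev1.PodFailed:
                return fmt.Errorf("pod %s failed", pod.Name)
            }
        }
        return fmt.Errorf("watch channel closed before pod %s completed", name)
    }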

Relation between preStop hook and terminationGracePeriodSeconds

Submitted by 梦想与她 on 2019-12-06 06:29:11
Basically what I am trying to do is play around with the pod lifecycle and check whether we can do some cleanup/backup, such as copying logs, before the pod terminates. What I need: copy logs/heap dumps from the container to a hostPath/S3 before termination. What I tried: I used a preStop hook with a bash command to echo a message (just to see if it works!). I used terminationGracePeriodSeconds together with a delay in preStop and toggled them to see how the process works. E.g. keep terminationGracePeriodSeconds: 30 sec (default) and set the preStop command to sleep for 50 sec; then the message should not be generated, since the
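For reference, a sketch of the timing experiment described above: the grace period countdown starts when termination begins, so a preStop hook that sleeps past terminationGracePeriodSeconds is cut short and its echo never runs (the image and message are placeholders):

    spec:
      terminationGracePeriodSeconds: 30          # default
      containers:
      - name: app
        image: busybox                           # placeholder
        lifecycle:
          preStop:
            exec:
              # sleeps past the grace period, so the echo below never runs
              command: ["/bin/sh", "-c", "sleep 50 && echo 'preStop finished'"]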

Connect to another pod from a pod

Submitted by 江枫思渺然 on 2019-12-06 02:49:35
Question: Basically, I have a Deployment that creates 3 containers which scale automatically: PHP-FPM, NGINX, and the container that contains the application, all set up with secrets, services, and ingress. The application also shares the project between PHP-FPM and NGINX, so it's all set up. Since I want to explore more of K8s, I decided to create a pod with Redis that also mounts a persistent disk (but that's not important). I have also created a service for Redis, and everything works perfectly fine if I SSH
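A sketch of how the application pod would normally reach the Redis pod through its Service via cluster DNS (the service name "redis", the pod name, and the redis-cli call are assumptions; use whatever client your image ships with):

    # from inside a pod in the same namespace, the Service name resolves via cluster DNS
    kubectl exec -it <app-pod> -- sh -c 'redis-cli -h redis -p 6379 ping'

    # from another namespace, use the fully qualified name:
    #   redis.default.svc.cluster.local:6379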

Watch Kubernetes pod status for completion in client-go

Submitted by 戏子无情 on 2019-12-05 15:41:49
I am creating a pod with the k8s client-go and setting up a watch to get notified when the pod has completed, so that I can read the pod's logs. The watch interface doesn't seem to provide any events on the channel. Here is the code; how would I get notified that the pod status is now completed and the logs are ready to be read?

    func readLogs(clientset *kubernetes.Clientset) {
        // namespace := "default"
        // label := "cithu"
        var (
            pod       *v1.Pod
            watchface watch.Interface
            err       error
        )
        // returns a pod after creation
        pod, err = createPod(clientset)
        fmt.Println(pod.Name, pod.Status, err)
        if watchface, err =
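Once the watch reports completion (as sketched in the earlier entry), the logs can be read with GetLogs; a sketch assuming a recent client-go where Stream takes a context (the pod name and namespace are placeholders passed in by the caller):

    package podlogs

    import (
        "context"
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printPodLogs streams the logs of a (completed) pod to stdout.
    func printPodLogs(clientset *kubernetes.Clientset, namespace, name string) error {
        req := clientset.CoreV1().Pods(namespace).GetLogs(name, &corev1.PodLogOptions{})
        stream, err := req.Stream(context.TODO())
        if err != nil {
            return err
        }
        defer stream.Close()
        _, err = io.Copy(os.Stdout, stream)
        return err
    }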

Some requests fail during autoscaling in Kubernetes

Submitted by ⅰ亾dé卋堺 on 2019-12-05 11:31:44
I set up a k8s cluster on microk8s and ported my application to it. I also added a horizontal autoscaler which adds pods based on CPU load. The autoscaler works fine: it adds pods when the load goes beyond the target, and when I remove the load it kills the pods again after some time. The problem is that I noticed, at the exact moments the autoscaler is creating new pods, some of the requests fail:

    POST Response Code : 200
    POST Response Code : 200
    POST Response Code : 200
    POST Response Code : 200
    POST Response Code : 200
    POST Response Code : 502
    java.io.IOException: Server returned
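One common cause is new pods being added to the Service endpoints before the application inside can actually serve traffic; a readinessProbe keeps a pod out of rotation until it responds. A minimal sketch (the /health path, port, and image are placeholders, not taken from the question):

    containers:
    - name: app
      image: myapp:latest              # placeholder
      ports:
      - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /health                # placeholder health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5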