kubernetes-pod

kubectl exec into a pod results in an "Unable to use a TTY" error every time when run through automation

。_饼干妹妹 submitted on 2020-05-17 09:24:32
Question: I have a simple automation that execs into a Kubernetes pod, but it always results in the error below: kubectl exec -it my-pod -c my-contaner -n my-namespace /bin/bash Unable to use a TTY - input is not a terminal or the right kind of file I am trying to run a simple shell script from Jenkins that execs into a pod and executes ls -las in the root directory, but it will not let me exec into the pod automatically. The same thing works fine if I do it manually in the Linux server terminal. Can someone
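This error typically just means that no TTY is attached when the command runs from Jenkins; a minimal sketch of the non-interactive form, reusing the names from the question and passing the command after --:

# No TTY is available in automation, so drop -t (and -i when stdin is not piped)
kubectl exec my-pod -c my-contaner -n my-namespace -- ls -las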

How do I make sure my CronJob's Job does NOT retry on failure?

拈花ヽ惹草 submitted on 2020-05-17 06:39:26
Question: I have a Kubernetes CronJob that runs on GKE and runs Cucumber JVM tests. If a Step fails due to an assertion failure, some resource being unavailable, etc., Cucumber rightly throws an exception, which causes the CronJob's Job to fail and the Kubernetes pod's status to change to ERROR. This leads to the creation of a new pod that tries to run the same Cucumber tests again, which fails again and retries again. I don't want any of these retries to happen. If a CronJob job fails, I want it to remain in
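One way to express "no retries" is to set the Job's backoffLimit to 0 and the pod's restartPolicy to Never; a minimal sketch with placeholder names and schedule (batch/v1beta1 was the CronJob API version current at the time of the question):

apiVersion: batch/v1beta1   # use batch/v1 on newer clusters
kind: CronJob
metadata:
  name: cucumber-tests
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 0            # do not create replacement pods for a failed Job
      template:
        spec:
          restartPolicy: Never   # do not restart the failed container in place
          containers:
          - name: cucumber
            image: my-tests:latest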

Why am I not able to run the SparkPi example on a Kubernetes (K8s) cluster?

▼魔方 西西 submitted on 2020-05-17 04:13:24
Question: I have a K8s cluster up and running, for now on VMs inside VMware Workstation. I'm trying to deploy a Spark application natively, using the official documentation from here. However, I also landed on this article, which I felt made things clearer. Earlier my setup was running inside nested VMs: my machine is on Win10, and I had an Ubuntu VM inside which 3 more VMs were running for the cluster (not the best idea, I know). When I tried to run my setup by following the article
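For reference, the SparkPi submission from the official documentation has roughly the following shape (the API server host/port, container image, and jar path below are placeholders, not values from the question):

./bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/<spark-examples-jar>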

How can I get a pod's external IP from Go code at runtime?

点点圈 submitted on 2020-04-11 07:09:07
Question: A pretty simple question: how can I get the Pod my current Go code is running in? I need it because, for some reason, I need to ping the Pod's code directly instead of going through my regular endpoint, which would be the load balancer. My current config:
apiVersion: v1
kind: Service
metadata:
  name: web-socket-service-api
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
  # Port that accepts gRPC and JSON/HTTP2 requests over
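A minimal sketch of one common approach (not taken from the question): expose the pod's own IP to the process through the Downward API and read it in Go with os.Getenv("POD_IP"). Note that status.podIP is the cluster-internal pod address; an external address would normally belong to the node or the load balancer.

# container spec fragment; names are placeholders
containers:
- name: web-socket-api
  image: my-image:latest
  env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP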

DNS pods fail after kubeadm init

ぐ巨炮叔叔 submitted on 2020-03-25 19:23:27
Question: I'm running kubeadm init --pod-network-cidr=10.244.0.0/16 to deploy K8s. After that I run kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml to install the Flannel pod network. Right after that the CoreDNS pods are up and running, but their logs say: [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.5 linux/amd64, go1.13.4, c2fd1b2 [ERROR] plugin/errors: 2
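A minimal troubleshooting sketch, assuming the usual suspects of a pod CIDR mismatch or an upstream-DNS loop (the ConfigMap name below comes from that flannel manifest):

# confirm flannel's Network matches the --pod-network-cidr used at init
kubectl -n kube-system get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
# then inspect the CoreDNS pods that are logging the errors
kubectl -n kube-system logs -l k8s-app=kube-dns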

Two Kubernetes pods communicating without knowing the exposed address

烈酒焚心 submitted on 2020-02-07 04:06:59
Question: I plan to deploy 2 Kubernetes pods with a NodePort service to expose them on the network. Now I want pod 1 to be able to access pod 2 through its service. The problem is that I am writing the Deployment files and I don't know the IP address pod 2 will get from the cluster, but I need to set that address in pod 1's file via an environment variable. Is there another way in a Kubernetes cluster to make them reachable by something like the name of the service or something like that? I failed to google for this case,
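Cluster DNS makes this possible without knowing any pod IP: pod 1 can reach pod 2 by the name of pod 2's Service. A minimal sketch with placeholder names:

apiVersion: v1
kind: Service
metadata:
  name: pod2-service
spec:
  type: NodePort
  selector:
    app: pod2          # matches the labels on pod 2's Deployment
  ports:
  - port: 8080
    targetPort: 8080
---
# In pod 1's Deployment, reference the Service by name instead of an IP, e.g.:
# env:
# - name: POD2_URL
#   value: "http://pod2-service.default.svc.cluster.local:8080"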

Kubernetes: Policy check before container execution

北城余情 submitted on 2020-02-05 02:31:12
Question: I am new to Kubernetes. I am looking to see if it's possible to hook into the container execution life cycle events in the orchestration process, so that I can call an API, pass it the details of the container, and check whether this container is allowed to execute in the given environment, location, etc. An example check could be: a container may only run in Europe or US data centers, so if someone tries to execute this container outside those regions' data centers, it should not be allowed. Can
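One mechanism for this kind of gate is a validating admission webhook (policy engines such as OPA Gatekeeper build on the same idea): the API server calls an external endpoint before the pod is admitted. A minimal sketch; the webhook service and path here are hypothetical:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: region-policy-check
webhooks:
- name: region-policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      name: policy-checker      # hypothetical service that calls your policy API
      namespace: policy-system
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]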

nslookup does not resolve kubernetes.default

天涯浪子 submitted on 2020-01-25 08:02:28
Question: I tried the following command on my minikube setup to verify whether DNS is working: kubectl exec -ti busybox -- nslookup kubernetes.default but this is the output I am getting: Server: 10.96.0.10 Address 1: 10.96.0.10 nslookup: can't resolve 'kubernetes.default' command terminated with exit code 1 Apart from that, I checked the CoreDNS pod logs and they show something like the following: 2019-11-07T12:25:23.694Z [ERROR] plugin/errors: 0 5606995447546819070.2414697521008405831. HINFO: read udp
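A minimal sketch for narrowing this down, assuming the well-known problem that nslookup in recent busybox images often fails against cluster DNS even when resolution itself works:

# retry with an older busybox (or a dnsutils image)
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
# and confirm the pod's resolver points at the cluster DNS service
kubectl exec -ti busybox -- cat /etc/resolv.conf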

Some requests fail during autoscaling in Kubernetes

被刻印的时光 ゝ submitted on 2020-01-23 07:50:26
Question: I set up a K8s cluster on MicroK8s and ported my application to it. I also added a horizontal autoscaler, which adds pods based on CPU load. The autoscaler works fine: it adds pods when the load goes beyond the target, and when I remove the load it kills the pods again after some time. The problem is that I noticed, at the exact moments the autoscaler is creating new pods, some of the requests fail: POST Response Code : 200 POST Response Code : 200 POST Response Code : 200 POST
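A common contributor (an assumption here, not stated in the question) is that freshly scaled pods receive traffic before the application inside is ready to serve it; a readiness probe keeps a pod out of the Service endpoints until it passes. A minimal sketch with a hypothetical /healthz endpoint:

# container spec fragment inside the Deployment targeted by the autoscaler
containers:
- name: app
  image: my-app:latest
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5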