kubectl

kubectl wait sometimes times out unexpectedly

魔方 西西 submitted on 2020-07-08 03:41:26
Question: I just added kubectl wait --for=condition=ready pod -l app=appname --timeout=30s as the last step of a BitBucket Pipeline, to report a deployment failure if the new pod somehow produces errors. I've realized that the wait isn't really consistent: sometimes it times out even though the new pod from the new image doesn't produce any errors and turns to the Ready state. I try to always change deployment.yaml or push a newer image every time to test this, and the result is inconsistent. BTW, I believe using kubectl
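A sketch of a more deterministic check, assuming the Deployment is also named appname (only the app=appname label appears in the question): kubectl rollout status follows the specific rollout, so it is not confused by old pods that still match the label selector while they terminate.

    # Wait for the rollout of the new ReplicaSet to complete (or fail):
    kubectl rollout status deployment/appname --timeout=60s

    # Only then, optionally, wait on the pods themselves:
    kubectl wait --for=condition=ready pod -l app=appname --timeout=60s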

Can't connect to the ETCD of Kubernetes

☆樱花仙子☆ submitted on 2020-06-29 03:58:49
Question: I've accidentally drained/uncordoned all nodes in Kubernetes (even the master) and now I'm trying to bring it back by connecting to etcd and manually changing some keys in there. I successfully got a shell into the etcd container:

$ docker ps
CONTAINER ID  IMAGE                        COMMAND                 CREATED       STATUS       PORTS  NAMES
8fbcb67da963  quay.io/coreos/etcd:v3.3.10  "/usr/local/bin/etcd"   17 hours ago  Up 17 hours         etcd1
a0d6426df02a  cd48205a40f0                 "kube-controller-man…"  17 hours ago  Up 17 hours         k8s_kube-controller-manager_kube-controller
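A hedged sketch of inspecting etcd from inside that container with etcdctl; the certificate paths are assumptions based on a typical kubeadm layout and may differ on this cluster:

    docker exec -it etcd1 sh
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      get / --prefix --keys-only | head

Note that Kubernetes stores most objects in etcd in a binary protobuf encoding, so hand-editing keys is rarely practical; if the API server still answers, kubectl uncordon is the safer recovery path.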

Connecting kubectl to a remote cluster from scratch

↘锁芯ラ submitted on 2020-06-28 09:57:35
Question: I've created a local Kubernetes cluster using Ansible. Everything is running, but now I'm trying to connect my kubectl to the cluster (in the VMs). My cluster is running on https://IP:6443. First I got:

$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

So I tried this solution:

kubectl config set-credentials kubeuser/IP --username=kubeuser --password=kubepassword
kubectl config set-cluster IP --insecure-skip-tls-verify=true -
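A hedged sketch of building a kubeconfig for a remote cluster; the cluster, user, and context names are placeholders, and the certificate paths assume a kubeadm-style install (on such clusters, simply copying /etc/kubernetes/admin.conf from the master to ~/.kube/config is often easier):

    # Register the cluster endpoint with its CA instead of skipping TLS verification:
    kubectl config set-cluster mycluster --server=https://IP:6443 \
      --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true

    # Register credentials (client certs; basic username/password auth is removed
    # in recent Kubernetes versions):
    kubectl config set-credentials kubeuser \
      --client-certificate=admin.crt --client-key=admin.key --embed-certs=true

    # Tie them together and activate the context:
    kubectl config set-context mycontext --cluster=mycluster --user=kubeuser
    kubectl config use-context mycontext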

EKS: Unable to pull logs from pods

大兔子大兔子 submitted on 2020-06-26 06:47:33
Question: The kubectl logs command intermittently fails with a "getsockopt: no route to host" error.

# kubectl logs -f mypod-5c46d5c75d-2Cbtj
Error from server: Get https://X.X.X.X:10250/containerLogs/default/mypod-5c46d5c75d-2Cbtj/metaservichart?follow=true: dial tcp X.X.X.X:10250: getsockopt: no route to host

If I run the same command 5-6 times it works. I am not sure why this is happening. Any help will be really appreciated.

Answer 1: Just FYI, I just tried using another VPC 172.18.X.X for EKS, and all
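The error points at the kubelet's port 10250 on the node being unreachable from the API server. A quick probe, assuming X.X.X.X stands in for the node IP from the error message; on EKS the node security group must also allow inbound TCP 10250 from the cluster security group:

    # Probe the kubelet's log-serving port from a machine inside the VPC:
    nc -vz X.X.X.X 10250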

Accidentally drained all nodes in Kubernetes (even master). How can I bring my Kubernetes back?

烂漫一生 submitted on 2020-06-25 06:55:13
Question: I accidentally drained all nodes in Kubernetes (even the master). How can I bring my Kubernetes back? kubectl is not working anymore:

kubectl get nodes

Result:

The connection to the server 172.16.16.111:6443 was refused - did you specify the right host or port?

Here is the output of systemctl status kubelet on the master node (node1):

● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue
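A minimal recovery sketch, assuming a kubeadm-style cluster where the control plane runs as static pods from /etc/kubernetes/manifests; the node names are placeholders:

    # On the master: check whether the control-plane containers are up at all:
    docker ps -a | grep -E 'kube-apiserver|etcd'

    # Restarting the kubelet makes it re-create static pods from their manifests:
    sudo systemctl restart kubelet

    # Once the API server at 172.16.16.111:6443 answers again, make nodes schedulable:
    kubectl uncordon node1 node2 node3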

Kubernetes share volume between containers inside a Deployment

杀马特。学长 韩版系。学妹 submitted on 2020-06-24 14:53:31
Question: Before posting this question I followed this answer: How to mimic '--volumes-from' in Kubernetes, but it didn't work for me. I have 2 containers:

node: its image contains all the files related to the app (inside /var/www)
nginx: it needs to access the files inside the node image (especially the /clientBuild folder where I have all the assets)

What is inside the node image:

$ docker run node ls -l
> clientBuild/
> package.json
> ...

A part of the nginx.prod.conf:

location ~* \.(jpeg|jpg|gif
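A hedged sketch of the usual pattern for this: both containers mount one emptyDir, and an init container run from the app image copies the built assets into it before nginx starts. Image names and paths are taken from the question where possible; everything else is a placeholder:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          volumes:
          - name: assets
            emptyDir: {}
          initContainers:
          - name: copy-assets
            image: node                 # the app image from the question
            command: ["sh", "-c", "cp -a /var/www/clientBuild/. /assets/"]
            volumeMounts:
            - name: assets
              mountPath: /assets
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - name: assets              # nginx sees the copied files read-only
              mountPath: /var/www/clientBuild
              readOnly: true

The copy-at-startup step is what replaces Docker's --volumes-from: Kubernetes volumes start empty and hide whatever the image had at the mount path, so the files must be copied in explicitly.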

Getting “ErrImageNeverPull” in pods

情到浓时终转凉″ submitted on 2020-06-12 02:49:08
Question: I am using minikube to test out the deployment and was going through this link. My manifest file for the deployment is:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        imagePullPolicy: Never # <-- here we go!
        image: sams
        ports:
        - containerPort: 80

and after this, when I tried to execute the commands below, I got this output:

user@usesr:~/Downloads$ kubectl create -f mydeployment.yaml --validate
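ErrImageNeverPull means imagePullPolicy: Never found no local image, and with minikube "local" means inside minikube's own container runtime, not the host's. A hedged sketch of the usual fix, assuming the image tag sams from the manifest and a Dockerfile in the current directory:

    # Point the host's docker CLI at minikube's Docker daemon, then rebuild
    # so the image exists inside minikube:
    eval $(minikube docker-env)
    docker build -t sams .

    # Recreate the pod so it picks up the now-present image:
    kubectl delete pod -l app=webapp

Newer minikube releases can also copy an existing host image in directly with minikube image load sams.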

How to save the content of a configmap to a file with kubectl and jsonpath?

自作多情 submitted on 2020-06-11 20:11:24
Question: I'm trying to save the contents of a configmap to a file on my local hard drive. kubectl supports selecting with JSONPath, but I can't find the expression I need to select just the file contents. The configmap was created using the command:

kubectl create configmap my-configmap --from-file=my.configmap.json=my.file.json

When I run kubectl describe configmap my-configmap I see the following output:

Name:        my-configmap
Namespace:   default
Labels:      <none>
Annotations: <none>

Data
====
my.file.json:
-
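A hedged sketch of the extraction; the data key my.configmap.json comes from the --from-file flag above, and the dots inside the key must be escaped so JSONPath does not treat them as path separators:

    kubectl get configmap my-configmap \
      -o "jsonpath={.data['my\.configmap\.json']}" > my.file.json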