kubectl

kubectl --token=$TOKEN doesn't run with the permissions of the token

让人想犯罪 __ submitted on 2021-02-08 06:59:24
Question: When I run kubectl with the --token flag and pass a token explicitly, it still uses the administrator credentials from my kubeconfig file. This is what I did:

NAMESPACE="default"
SERVICE_ACCOUNT_NAME="sa1"
kubectl create sa $SERVICE_ACCOUNT_NAME
kubectl create clusterrolebinding list-pod-clusterrolebinding \
  --clusterrole=list-pod-clusterrole \
  --serviceaccount="$NAMESPACE":"$SERVICE_ACCOUNT_NAME"
kubectl create clusterrole list-pod-clusterrole \
  --verb=list \
  --resource=pods
TOKEN=
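
Not part of the original question: a minimal sketch, assuming a cluster recent enough for kubectl create token (v1.24+), of fetching the token and passing it alongside an empty kubeconfig so that the admin client certificate from the default kubeconfig cannot authenticate first (a common explanation for --token appearing to be ignored). The <api-server-url> and CA path are placeholders.

$ TOKEN=$(kubectl create token "$SERVICE_ACCOUNT_NAME" --namespace "$NAMESPACE")   # short-lived token via the TokenRequest API
$ kubectl --kubeconfig=/dev/null \
    --server="https://<api-server-url>" \
    --certificate-authority=/path/to/ca.crt \
    --token="$TOKEN" \
    get pods

Checking only the RBAC side is simpler with impersonation, which needs no token at all:

$ kubectl auth can-i list pods --as="system:serviceaccount:$NAMESPACE:$SERVICE_ACCOUNT_NAME"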

Kubernetes - Find out service IP range CIDR programmatically

不想你离开。 submitted on 2021-02-08 05:29:20
Question: I need a way to get the service cluster IP range (as a CIDR) that works across all Kubernetes clusters. I tried the following, which works fine for clusters created with kubeadm because it greps the apiserver pod's arguments:

$ kubectl cluster-info dump | grep service-cluster-ip-range
"--service-cluster-ip-range=10.96.0.0/12",

This does not work on all Kubernetes clusters, e.g. on gcloud. So the question is: what is the best way to get the service IP range programmatically?

Answer 1: I don't think there is a way to
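
Not from the thread: a sketch of one widely used workaround, which asks the API server to validate an obviously invalid clusterIP and parses the rejection message, since that message includes the configured service CIDR. The exact error wording can vary between Kubernetes versions, so the sed expression is an assumption.

$ SVCRANGE=$(echo '{"apiVersion":"v1","kind":"Service","metadata":{"name":"throwaway-svc"},"spec":{"clusterIP":"1.1.1.1","ports":[{"port":443}]}}' \
    | kubectl apply -f - 2>&1 \
    | sed 's/.*valid IPs is //')   # nothing is created; the apply is rejected
$ echo "$SVCRANGE"                 # e.g. 10.96.0.0/12 on a kubeadm cluster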

Can't connect to container cluster: environment variable HOME or KUBECONFIG must be set when running gcloud get credentials

大憨熊 submitted on 2021-02-07 14:28:33
Question: For some reason I can't connect to the cluster. Having followed the instructions for Google Container Engine after setting up the cluster, I get the following error:

ERROR: (gcloud.container.clusters.get-credentials) environment variable HOME or KUBECONFIG must be set to store credentials for kubectl

when running this command:

gcloud container clusters get-credentials [my cluster name] --zone us-central1-b --project [my project name]

Any ideas how I should be setting the variable HOME or
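
A sketch of the usual fix, assuming a shell where neither variable is set (common in CI or cron jobs): point HOME or KUBECONFIG at a writable path before calling gcloud. The paths and names in angle brackets are placeholders.

$ export KUBECONFIG="$PWD/.kube/config"      # or: export HOME=/home/<your-user>
$ mkdir -p "$(dirname "$KUBECONFIG")"
$ gcloud container clusters get-credentials <my-cluster-name> \
    --zone us-central1-b --project <my-project-name>
$ kubectl get nodes                          # should now use the stored credentials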

What happens when you drain nodes in a Kubernetes cluster?

天大地大妈咪最大 submitted on 2021-02-07 08:53:43
Question: I'd like some clarification on preparing for maintenance when you drain nodes in a Kubernetes cluster. Here's what I know happens when you run kubectl drain MY_NODE:

- The node is cordoned.
- Pods are gracefully shut down.
- You can opt to ignore DaemonSet pods, because if they were shut down they would just be re-spawned right away again.

I'm still confused about what happens when a node is drained, though. Questions: What happens to the pods? As far as I know, there's no 'live migration' of pods in Kubernetes.
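
As a practical illustration (not from the question): evicted pods are not migrated; their controllers (Deployments, ReplicaSets, StatefulSets) recreate them on other schedulable nodes. A typical maintenance sequence might look like the sketch below; --delete-emptydir-data is the newer spelling of --delete-local-data on older kubectl versions.

$ kubectl drain MY_NODE \
    --ignore-daemonsets \
    --delete-emptydir-data \
    --grace-period=120     # cordon the node, then evict pods with a 120s grace period
$ # ... perform maintenance ...
$ kubectl uncordon MY_NODE # make the node schedulable again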

Kubernetes has a ton of pods in error state that can't seem to be cleared

夙愿已清 submitted on 2021-02-06 15:54:05
Question: I was originally trying to run a Job that seemed to get stuck in a CrashLoopBackOff. Here is the Job manifest:

apiVersion: batch/v1
kind: Job
metadata:
  name: es-setup-indexes
  namespace: elk-test
spec:
  template:
    metadata:
      name: es-setup-indexes
    spec:
      containers:
      - name: es-setup-indexes
        image: appropriate/curl
        command: ['curl -H "Content-Type: application/json" -XPUT http://elasticsearch.elk-test.svc.cluster.local:9200/_template/filebeat -d@/etc/filebeat/filebeat.template.json']
        volumeMounts:
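
Not from the original thread: a sketch of how the failed pods could be cleared in bulk and the Job re-created with the curl call wrapped in a shell, since a single-string command: with no shell to interpret it is a common cause of this kind of crash loop. The manifest filename is an assumption.

$ kubectl delete pods --namespace elk-test --field-selector=status.phase=Failed   # clear the accumulated error pods
$ kubectl delete job es-setup-indexes --namespace elk-test
# In the manifest, run the command through a shell, e.g.:
#   command: ["/bin/sh", "-c"]
#   args:
#     - curl -H "Content-Type: application/json" -XPUT
#       http://elasticsearch.elk-test.svc.cluster.local:9200/_template/filebeat
#       -d@/etc/filebeat/filebeat.template.json
$ kubectl apply -f es-setup-indexes.yaml     # hypothetical file containing the fixed Job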

TLS doesn't work with LoadBalancer backed Service in Kubernetes

旧时模样 submitted on 2021-02-05 09:32:29
Question: I am trying to expose an application in my cluster by creating a Service of type LoadBalancer. The reason is that I want this app to have a separate channel for communication. I have a kOps cluster. I want to use AWS's Network Load Balancer so that it gets a static IP. When I create the Service with port 80 mapped to the port the app is running on, everything works, but when I try to add port 443 it just times out. Here is the configuration that works:

apiVersion: v1
metadata:
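
Not from the question: a sketch of what a dual-port LoadBalancer Service might look like, assuming TLS is terminated at the load balancer with an ACM certificate. The names, selector, target port, and certificate ARN are placeholders, and whether the aws-load-balancer-ssl-cert annotation is honored for NLBs depends on the Kubernetes / cloud-provider version; on older versions TLS had to be terminated in the pod or on a classic ELB instead.

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb                 # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <acm-certificate-arn>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # placeholder selector
  ports:
  - name: http
    port: 80
    targetPort: 8080              # assumed container port
  - name: https
    port: 443
    targetPort: 8080
EOF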

Kubernetes: no logs retrieved by kubectl

大兔子大兔子 submitted on 2021-02-05 06:14:23
Question: I am trying to run a simple image in a specific namespace to debug some issues:

kubectl run busy --image busybox --namespace my-local-dev
deployment.apps/busy created

However, for some reason the container keeps restarting:

busy-67b577b945-ng2lt   0/1   CrashLoopBackOff   5   3m

and I am unable to get any logs, even with the --previous flag:

$ kubectl logs -f --namespace my-local-dev busy-67b577b945-ng2lt --previous
Unable to retrieve container logs for docker:/
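
Not from the question: busybox's default command is a shell that exits immediately when there is no attached TTY, so the pod crash-loops before writing any output. A sketch of two ways to keep it alive for debugging; note that newer kubectl versions create a bare Pod named busy, while older ones (as in the question) create a Deployment with a generated pod name.

$ kubectl run busy --image=busybox --namespace my-local-dev -- sleep 3600   # give the container something long-running
$ kubectl exec -it --namespace my-local-dev busy -- sh

# Or start a throwaway interactive pod that is removed on exit:
$ kubectl run -it --rm debug --image=busybox --namespace my-local-dev --restart=Never -- sh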