Question
After upgrading my cluster in GKE, the dashboard will no longer accept certificate authentication.
"No problem, there's a token available in the .kube/config," says my colleague:
user:
  auth-provider:
    config:
      access-token: REDACTED
      cmd-args: config config-helper --format=json
      cmd-path: /home/user/workspace/google-cloud-sdk/bin/gcloud
      expiry: 2018-01-09T08:59:18Z
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
Except in my case there isn't...
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /home/user/Dev/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
I've tried re-authenticating with gcloud, comparing gcloud settings with colleagues, updating gcloud, re-installing gcloud, and checking permissions in Cloud Platform. Pretty much everything I can think of, but still no access token gets generated.
Can anyone help please?!
$ gcloud container clusters get-credentials cluster-3 --zone xxx --project xxx
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-3.
$ gcloud config list
[core]
account = xxx
disable_usage_reporting = False
project = xxx
Your active configuration is: [default]
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4"
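For reference, the gcloud re-authentication mentioned above would typically look like this (a sketch of standard gcloud commands; the exact invocations aren't quoted in the original question):
$ gcloud auth login
$ gcloud components update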
Answer 1:
OK, very annoying and silly answer: you have to make any request using kubectl for the token to be generated and saved into the kubeconfig file.
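For example (any authenticated request will do; "get nodes" is just one illustrative choice, and the grep is a quick way to confirm the token landed in the file):
$ kubectl get nodes
$ grep access-token ~/.kube/config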
Answer 2:
You mentioned you "upgraded your cluster in GKE" - I'm not sure what you actually did, so I'm interpreting it as creating a new cluster. There are two things to ensure that you didn't appear to cover in your problem statement: one, that kubectl is installed, and two, that you actually generate a new kubeconfig file (you could easily be referring to an older ~/.kube/config left over from before the upgrade in GKE). Running these commands ensures you have the correct authentication set up, and that token should become available:
$ gcloud components install kubectl
$ gcloud container clusters create <cluster-name>
$ gcloud container clusters get-credentials <cluster-name>
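To confirm those commands wrote a fresh kubeconfig entry, a quick check (a hedged addition, not part of the original answer; GKE names contexts gke_<project_id>_<zone>_<cluster-name>):
$ kubectl config current-context
gke_<project_id>_<zone>_<cluster-name>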
Then generate the kubeconfig file (assuming you have a running cluster on GCP and a service account configured for the project/GKE, have run kubectl proxy, etc.):
$ gcloud container clusters get-credentials <cluster_id>
This will create a ${HOME}/.kube/config file which has the token in it. Inspect the config file and you'll see the token value:
$ cat ~/.kube/config
OR
$ kubectl config view
will display it to the screen...
...
users:
- name: gke_<project_id>_<zone>_<cluster_id>
  user:
    auth-provider:
      config:
        access-token: **<COPY_THIS_TOKEN>**
        cmd-args: config config-helper --format=json
        cmd-path: ...path-to.../google-cloud-sdk/bin/gcloud
        expiry: 2018-04-13T23:11:15Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
With that token copied, go back to http://localhost:8001/ and select "Token", then paste the token value there... good to go.
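As an alternative to copying the token out of the file by eye, it can be extracted directly (a hedged sketch; it assumes the GKE user is the first entry in your config, and newer kubectl versions need --raw because config view redacts secrets):
$ kubectl config view --raw -o jsonpath='{.users[0].user.auth-provider.config.access-token}'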
Source: https://stackoverflow.com/questions/48164739/no-access-token-in-kube-config