Kubernetes: Failed to get GCE GCECloudProvider with error <nil>


Question


I have set up a custom Kubernetes cluster on GCE using kubeadm, and I am trying to use StatefulSets with persistent storage.

I have the following configuration:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west3-b
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myname
  labels:
    app: myapp
spec:
  serviceName: myservice
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: mycontainer
          image: ubuntu:16.04
          volumeMounts:
          - name: myapp-data
            mountPath: /srv/data
      imagePullSecrets:
      - name: sitesearch-secret
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gce-slow
      resources:
        requests:
          storage: 1Gi
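
For reference, a sketch of how these manifests would be applied and checked, assuming they are saved as statefulset.yaml (the filename is illustrative):

kubectl apply -f statefulset.yaml
kubectl get pvc    # the claim should go from Pending to Bound once provisioning works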

And I get the following error:

Nopx@vm0:~$ kubectl describe pvc
 Name:          myapp-data-myname-0
 Namespace:     default
 StorageClass:  gce-slow
 Status:        Pending
 Volume:
 Labels:        app=myapp
 Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
 Finalizers:    [kubernetes.io/pvc-protection]
 Capacity:
 Access Modes:
 Events:
   Type     Reason              Age   From                         Message
   ----     ------              ----  ----                         -------
   Warning  ProvisioningFailed  5s    persistentvolume-controller  Failed to provision volume with StorageClass "gce-slow": Failed to get GCE GCECloudProvider with error <nil>

I am groping in the dark and do not know what is missing. It seems logical that it doesn't work, since the provisioner never authenticates to GCE. Any hints and pointers are very much appreciated.

EDIT

I tried the solution here, editing the kubeadm config with kubeadm config upload from-file; however, the error persists. The kubeadm config currently looks like this (a sketch of the upload command follows after it):

api:
  advertiseAddress: 10.156.0.2
  bindPort: 6443
  controlPlaneEndpoint: ""
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: gce
criSocket: /var/run/dockershim.sock
etcd:
  caFile: ""
  certFile: ""
  dataDir: /var/lib/etcd
  endpoints: null
  image: ""
  keyFile: ""
imageRepository: k8s.gcr.io
kubeProxy:
  config:
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 192.168.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
kubeletConfiguration: {}
kubernetesVersion: v1.10.2
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
nodeName: mynode
privilegedPods: false
token: ""
tokenGroups:
- system:bootstrappers:kubeadm:default-node-token
tokenTTL: 24h0m0s
tokenUsages:
- signing
- authentication
unifiedControlPlaneImage: ""
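
For reference, a sketch of the upload step mentioned above, assuming the config was saved locally as kubeadm.yaml (the filename is illustrative):

sudo kubeadm config upload from-file --config kubeadm.yaml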

EDIT

The issue was resolved in the comments thanks to Anton Kostenko: the last edit, coupled with a kubeadm upgrade, solved the problem.
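
A sketch of that upgrade step, assuming the edited config is in kubeadm.yaml and the cluster version from the config above (both the filename and the exact invocation are assumptions, not from the original post):

# re-applying the same version may additionally require --force
sudo kubeadm upgrade apply v1.10.2 --config kubeadm.yaml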


Answer 1:


The answer took me a while, but here it is:

Using the GCECloudProvider in Kubernetes outside of Google Kubernetes Engine has the following prerequisites (the last point is kubeadm-specific):

  1. The VM needs to run with a service account that has the right to provision disks. Info on how to run a VM with a service account can be found here (a gcloud sketch follows after this list).

  2. The Kubelet needs to run with the argument --cloud-provider=gce. For this, the KUBELET_KUBECONFIG_ARGS variable in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf has to be edited (a drop-in sketch follows after this list). The Kubelet can then be restarted with sudo systemctl restart kubelet.

  3. The Kubernetes cloud-config file needs to be configured. The file can be found at /etc/kubernetes/cloud-config, and the following content is enough to get the cloud provider to work:

    [Global]
    project-id = "<google-project-id>"
    
  4. Kubeadm needs to have GCE configured as its cloud provider. The config posted in the question works fine for this; however, the nodeName has to be changed (it must match the name of the GCE instance, since the cloud provider looks nodes up by instance name).
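
For point 1, a minimal sketch of launching the VM with a suitable service account; the instance name, zone, and service-account email below are placeholders, not taken from the original post:

gcloud compute instances create vm0 \
  --zone europe-west3-b \
  --service-account my-sa@my-project.iam.gserviceaccount.com \
  --scopes compute-rw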

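For point 2, a sketch of the edited kubelet drop-in; the surrounding flags are the stock kubeadm defaults of that era, and the --cloud-config flag is an assumption (the answer only names --cloud-provider=gce):

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=gce --cloud-config=/etc/kubernetes/cloud-config"

# reload systemd and restart the kubelet so the new flags take effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet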


Source: https://stackoverflow.com/questions/50100219/kubernetes-failed-to-get-gce-gcecloudprovider-with-error-nil
