Question
I ran into a couple of problems while installing Kubernetes with kubeadm. I am working behind a corporate network, so I declared the proxy settings in the session environment:
$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
After installing all the necessary components and dependencies, I began to initialize the cluster. In order to carry the current environment variables into the root shell, I used sudo -E bash:
$ sudo -E bash -c "kubeadm init --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16"
The output then hung forever at the last message shown below:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [loadbalancer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
I then found that none of the kube components was up, while kubelet kept requesting kube-apiserver. sudo docker ps -a returned nothing.
What is the possible root cause of this?
Thanks in advance.
Answer 1:
I would strongly suspect it is trying to pull down the Docker images for gcr.io/google_containers/hyperkube:v1.7.3 (or similar), which requires teaching the Docker daemon about the proxies via its systemd configuration.
That would certainly explain why docker ps -a shows nothing, but I would expect the dockerd logs, journalctl -u docker.service (or its equivalent on your system), to complain about the inability to pull from gcr.io.
Based on what I read in the kubeadm reference guide, they expect you to patch the systemd config on the target machine to expose those environment variables, rather than just setting them in the shell that launched kubeadm (although that certainly could be a feature request).
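For reference, a minimal sketch of what that systemd patch usually looks like, assuming the same proxy-ip:port and master-ip/node-ip placeholders as in the question (the drop-in path and variable names follow Docker's documented proxy setup):

# file: /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1,master-ip,node-ip"

Then reload systemd and restart Docker so the daemon picks up the new environment, and verify:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ sudo systemctl show --property=Environment docker

With that in place, resetting and re-running kubeadm init should let dockerd pull the gcr.io images through the proxy.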
Source: https://stackoverflow.com/questions/45580788/how-to-install-kubernetes-cluster-behind-proxy-with-kubeadm