Question
I was able to follow the documentation and get a Kubernetes cluster up, but I would like to add a second master node. I tried this on the second node but am seeing an error:
[root@kubemaster02 ~]# kubeadm init --apiserver-advertise-address=10.122.161.XX --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.10.0
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
My question is whether doing an init like this is the correct way to add the second master. Another question I have is how to tell whether a node is configured as a master or not; the following command is not showing the ROLES column for some reason (maybe because of older versions):
[root@master01 ~]# kubectl get nodes -o wide
NAME           STATUS    AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION
kubemaster01   Ready     215d      v1.8.1    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64
kubemaster02   Ready     132d      v1.8.4    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64
kubenode01     Ready     215d      v1.8.1    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64
kubenode02     Ready     214d      v1.8.1    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64
Answer 1:
In your case, first look at what is running on port 10250:
netstat -nlp | grep 10250
Because your error is:
[ERROR Port-10250]: Port 10250 is in use
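Port 10250 is the kubelet's default port, so the most likely cause is that a kubelet is already running on kubemaster02, for example because the node was previously joined to the cluster. A rough sketch of how to confirm and clean this up (kubeadm reset is destructive, so only run it if you intend to re-initialize the node):
# Check whether a kubelet service is already holding port 10250:
systemctl status kubelet
# If this node was previously joined and you want to re-initialize it,
# reset it first (destructive: removes the node's kubeadm state):
kubeadm reset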
In general, you can bootstrap an additional master and have two masters. Before running kubeadm on the other master, you first need to copy the Kubernetes CA certificate and keys from kubemaster01. To do this, you have two options:
Option 1: Copy with scp
scp root@<kubemaster01-ip-address>:/etc/kubernetes/pki/* /etc/kubernetes/pki
Option 2: Copy and paste
Copy the contents of /etc/kubernetes/pki/ca.crt, /etc/kubernetes/pki/ca.key, /etc/kubernetes/pki/sa.key, and /etc/kubernetes/pki/sa.pub, and create these files manually on kubemaster02.
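As a minimal sketch combining both options, run the following on kubemaster02 to copy just the files listed above (this assumes root SSH access to kubemaster01; <kubemaster01-ip-address> is a placeholder you need to fill in):
# Run on kubemaster02: copy only the CA and service-account files
# from the first master.
mkdir -p /etc/kubernetes/pki
for f in ca.crt ca.key sa.key sa.pub; do
  scp root@<kubemaster01-ip-address>:/etc/kubernetes/pki/$f /etc/kubernetes/pki/
done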
The next step is to create a load balancer that sits in front of your master nodes. How you do this depends on your environment; you could, for example, use a cloud provider's load balancer, or set up your own using NGINX, keepalived, or HAProxy.
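For illustration, a minimal HAProxy sketch that TCP-balances the API servers across both masters (the master IPs are placeholders, and port 6443 assumes kubeadm's default API server port; adapt to your environment):
# Write a minimal layer-4 load-balancer config for the API servers.
cat >/etc/haproxy/haproxy.cfg <<EOF
global
    daemon

defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend k8s-api
    bind *:6443
    default_backend k8s-masters

backend k8s-masters
    balance roundrobin
    server kubemaster01 <kubemaster01-ip>:6443 check
    server kubemaster02 <kubemaster02-ip>:6443 check
EOF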
For bootstrapping, use a config.yaml:
cat >config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: <private-ip>
etcd:
  endpoints:
  - https://<your-etcd-ip>:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: <podCIDR>
apiServerCertSANs:
- <load-balancer-ip>
apiServerExtraArgs:
  apiserver-count: "2"
EOF
Ensure that the following placeholders are replaced:
<your-etcd-ip> with the IP address of your etcd
<private-ip> with the private IPv4 address of the master server
<podCIDR> with your Pod CIDR
<load-balancer-ip> with the endpoint used to connect to your masters
Then you can run the command:
kubeadm init --config=config.yaml
and bootstrap the masters.
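Regarding your second question: on older clusters where kubectl get nodes does not print a ROLES column, you can still check the node labels directly; kubeadm marks masters with the node-role.kubernetes.io/master label:
# Masters bootstrapped by kubeadm carry an empty-valued
# node-role.kubernetes.io/master label; workers do not.
kubectl get nodes --show-labels | grep node-role.kubernetes.io/master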
But if you really want an HA cluster, please follow the documentation's minimal requirements and use 3 nodes for the masters. These requirements come from etcd quorum: each master node runs an etcd member, and etcd needs a majority of its members available. With 2 members the quorum is 2, so losing one master takes the cluster down; with 3 members the quorum is also 2, so you can tolerate one failure.
Source: https://stackoverflow.com/questions/49887597/add-a-second-master-node-for-high-availabity-in-kubernetes