Question
We currently have two Kubernetes clusters:
- One set up with Kops, running on AWS
- One set up with kubeadm, running on our own hardware
We want to combine them to only have a single cluster to manage.
The master could end up being on AWS or on our servers, both are fine.
We can't find a way to add nodes configured with one cluster to the other.
kubeadm is not made available on nodes set up with Kops, so we can't do e.g. kubeadm token create --print-join-command
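(On the kubeadm cluster, that command prints a ready-to-run join line along these lines; the values here are illustrative placeholders:)

$ kubeadm token create --print-join-command
kubeadm join <api-server-host>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>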
Kops doesn't seem to have utilities to let us add arbitrary nodes; see "how to add an node to my kops cluster? (node in here is my external instance)".
This issue raises the same question but was left unanswered: https://github.com/kubernetes/kops/issues/5024
Answer 1:
You can join the nodes manually, but this is really not a recommended way of doing things.
If you're using kubeadm, you probably already have all the relevant components installed on the workers for them to join in a valid way. The process I'd follow is:
Run kubeadm reset on the on-prem node in question.
Log in to a Kops node and examine the kubelet configuration: systemctl cat kubelet
In there, you'll see the kubelet config is specified at /etc/sysconfig/kubelet. You'll need to copy that file and ensure the on-prem node loads it in its systemd startup config (a sketch of such a drop-in follows the example config below).
Copy the relevant config over to the on-prem node. You'll need to remove any references to the AWS cloud provider, and make sure the hostname is valid. Here's an example config I copied from a Kops node and modified:
DAEMON_ARGS="--allow-privileged=true --cgroup-root=/ --cluster-dns=100.64.0.10 --cluster-domain=cluster.local --enable-debugging-handlers=true --feature-gates=ExperimentalCriticalPodAnnotation=true --hostname-override=<my_dns_name> --kubeconfig=/var/lib/kubelet/kubeconfig --network-plugin=cni --node-labels=kops.k8s.io/instancegroup=onpremnodes,kubernetes.io/role=node,node-role.kubernetes.io/node= --non-masquerade-cidr=100.64.0.0/10 --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 --pod-manifest-path=/etc/kubernetes/manifests --register-schedulable=true --v=2 --cni-bin-dir=/opt/cni/bin/ --cni-conf-dir=/etc/cni/net.d/"
HOME="/root"
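To wire that file into the kubelet's systemd unit on the on-prem node, a minimal drop-in along these lines should work (the file name and kubelet binary path here are assumptions; check where your distro and kubeadm install actually put them):

# /etc/systemd/system/kubelet.service.d/20-kops-args.conf (hypothetical drop-in)
[Service]
# Load the environment file copied from the Kops node
EnvironmentFile=/etc/sysconfig/kubelet
# Clear the packaged ExecStart, then start the kubelet with the copied args
ExecStart=
ExecStart=/usr/local/bin/kubelet $DAEMON_ARGS

Then reload and restart: systemctl daemon-reload && systemctl restart kubelet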
Also, examine the kubelet's kubeconfig (it should be at /var/lib/kubelet/kubeconfig). This is the config that tells the kubelet which API server to register with. Ensure it exists on the on-prem node as well.
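For reference, that file is an ordinary kubeconfig pointing at the Kops API server. A minimal sketch, where the server address and credential paths are placeholders you'd take from an existing Kops node:

apiVersion: v1
kind: Config
clusters:
- name: kops-cluster
  cluster:
    certificate-authority: /var/lib/kubelet/ca.crt   # placeholder path
    server: https://api.<your-kops-cluster-domain>   # placeholder address
users:
- name: kubelet
  user:
    client-certificate: /var/lib/kubelet/kubelet.crt # placeholder paths
    client-key: /var/lib/kubelet/kubelet.key
contexts:
- name: kubelet
  context:
    cluster: kops-cluster
    user: kubelet
current-context: kubelet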
This should get your node joining the API server. You may have to do some debugging as you work through the process.
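Once the kubelet is running with the new config, you can check registration from any machine with admin access to the cluster:

kubectl get nodes -o wide            # the on-prem node should appear and eventually go Ready
kubectl describe node <my_dns_name>  # check the Events section if it doesn't register cleanly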
I really don't recommend doing this though, for the following reasons:
- Unless you use node labels in a sane way, you're going to have issues provisioning cloud elements. The kubelet interacts with the AWS API regularly, so if you use a Service of type LoadBalancer or any cloud volumes, you'll need to pin those workloads to specific nodes. You'll need to make heavy use of taints and tolerations (see the sketch after this list).
- Kubernetes workers aren't designed to connect over a WAN. You're probably going to see issues at some point with network latency and the like.
- If you do choose to go down this route, you'll need to ensure you have TLS configured in both directions for the API server <-> kubelet communication, or set up a VPN.
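As a sketch of the pinning mentioned in the first point (the taint key and value are made up for illustration; the node label comes from the --node-labels flag in the example config above):

# Keep ordinary, cloud-dependent workloads off the on-prem node:
kubectl taint nodes <my_dns_name> location=onprem:NoSchedule

# Workloads meant for the on-prem node then need a matching toleration and selector:
apiVersion: v1
kind: Pod
metadata:
  name: onprem-example   # illustrative name
spec:
  tolerations:
  - key: location
    operator: Equal
    value: onprem
    effect: NoSchedule
  nodeSelector:
    kops.k8s.io/instancegroup: onpremnodes
  containers:
  - name: app
    image: nginx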
Source: https://stackoverflow.com/questions/51427806/kubernetes-combining-a-kops-cluster-to-an-on-premise-kubeadm-cluster