Setting up a multi-master Kubernetes cluster on CentOS with kubeadm


Environment overview:
| Hostname | IP Address | Deployed Software | Role |
| -------- | -------------- | ------------------------------------- | ------ |
| M-kube12 | 192.168.10.12 | master+etcd+docker+keepalived+haproxy | master |
| M-kube13 | 192.168.10.13 | master+etcd+docker+keepalived+haproxy | master |
| M-kube14 | 192.168.10.14 | master+etcd+docker+keepalived+haproxy | master |
| N-kube15 | 192.168.10.15 | docker+node | node |
| N-kube16 | 192.168.10.16 | docker+node | node |
| VIP | 192.168.10.100 | | VIP |
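If name resolution is not handled by DNS, it helps to map these hostnames on every node before starting. A minimal sketch, assuming the hostnames and addresses from the table above (adjust to your environment):

# Append the cluster nodes to /etc/hosts on every machine
cat <<EOF >> /etc/hosts
192.168.10.12  M-kube12
192.168.10.13  M-kube13
192.168.10.14  M-kube14
192.168.10.15  N-kube15
192.168.10.16  N-kube16
EOF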

# 1. Disable the firewall and SELinux, install base packages
yum install -y net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl lrzsz    # run on all machines to install basic tools
systemctl stop firewalld && systemctl disable firewalld     # stop and disable the firewall
sestatus        # check the SELinux status
setenforce 0    # disable SELinux temporarily
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

swapoff -a      # turn off swap
sed -i 's/.*swap.*/#&/' /etc/fstab

# 2. Set up passwordless SSH login
ssh-keygen -t rsa        # generate a key pair
ssh-copy-id <node-ip>    # copy the public key to each node

# 3. Switch to domestic (China) yum mirrors
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.$(date +%Y%m%d)
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo

# Docker repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Configure a domestic Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache -y

#---------------------- Alternatively, a simpler repo definition without GPG checking:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# 4. Configure kernel parameters so bridged IPv4 traffic is passed to the iptables chains
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system

# 5. Raise the file-descriptor and process limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

# 6. Load the IPVS kernel modules
yum install ipset ipvsadm -y
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Run the script
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Alternative version (borrowed) that loads every available IPVS module:
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*).ko.*#\1#'\`; do
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
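Since passwordless SSH is configured above, the same preparation can be pushed from one machine to every node instead of typing it five times. A minimal sketch, assuming the node IPs from the table and that the steps above have been collected into a local prep.sh (a hypothetical file name):

#!/bin/bash
# Copy and run the preparation script on every node; prep.sh is assumed to contain steps 1-6 above
NODES="192.168.10.12 192.168.10.13 192.168.10.14 192.168.10.15 192.168.10.16"
for ip in $NODES; do
    scp prep.sh root@$ip:/tmp/prep.sh
    ssh root@$ip "bash /tmp/prep.sh && lsmod | grep -e ip_vs -e nf_conntrack_ipv4"
done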
# Install keepalived on the three master nodes
yum install -y keepalived

# Configuration on the 10.12 machine
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.100:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 100
    priority 100
    advert_int 1
    mcast_src_ip 192.168.10.12
    nopreempt
    authentication {
        auth_type PASS
        auth_pass fana123
    }
    unicast_peer {
        192.168.10.13
        192.168.10.14
    }
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

# keepalived configuration on the 13 machine
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.100:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 100
    priority 90
    advert_int 1
    mcast_src_ip 192.168.10.13
    nopreempt
    authentication {
        auth_type PASS
        auth_pass fana123
    }
    unicast_peer {
        192.168.10.12
        192.168.10.14
    }
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

# keepalived configuration on the 14 machine
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.10.100:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 100
    priority 80
    advert_int 1
    mcast_src_ip 192.168.10.14
    nopreempt
    authentication {
        auth_type PASS
        auth_pass fana123
    }
    unicast_peer {
        192.168.10.12
        192.168.10.13
    }
    virtual_ipaddress {
        192.168.10.100/24
    }
    track_script {
        CheckK8sMaster
    }
}
EOF

# Start keepalived
systemctl restart keepalived && systemctl enable keepalived
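To confirm keepalived is working, check that the VIP has landed on exactly one master (it should sit on 10.12 while that node is healthy and move to 10.13 or 10.14 if it fails). A quick check, assuming the ens33 interface name used above:

# Run on each master: 192.168.10.100 should appear on only one of them
systemctl status keepalived --no-pager | head -n 5
ip addr show ens33 | grep 192.168.10.100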
# Install haproxy; the configuration is identical on the 12, 13, and 14 machines
yum install -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server M-kube12 192.168.10.12:6443 check maxconn 2000
    server M-kube13 192.168.10.13:6443 check maxconn 2000
    server M-kube14 192.168.10.14:6443 check maxconn 2000
EOF

# Note: kube-apiserver will also listen on 6443 on these hosts. If the bind conflicts, bind only the VIP
# (bind 192.168.10.100:6443 -- net.ipv4.ip_nonlocal_bind=1 was enabled above) or use a different frontend
# port, as the container-based deployment below does with 6444.

# Start haproxy
systemctl enable haproxy && systemctl start haproxy
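A quick way to verify the load-balancer path before running kubeadm is to check that haproxy is listening and that the VIP answers. A sketch, assuming the VIP and port from the configuration above (before the API servers exist the backend checks will fail, which is expected):

ss -lntp | grep haproxy                       # haproxy should be listening on its frontend port
curl -k https://192.168.10.100:6443/healthz   # after kubeadm init this should return "ok"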

Alternatively, haproxy and keepalived can be deployed as containers:

# haproxy start script
mkdir -p /data/lb
cat > /data/lb/start-haproxy.sh << "EOF"
#!/bin/bash
MasterIP1=192.168.10.12
MasterIP2=192.168.10.13
MasterIP3=192.168.10.14
MasterPort=6443

docker run -d --restart=always --name HAProxy-K8S -p 6444:6444 \
        -e MasterIP1=$MasterIP1 \
        -e MasterIP2=$MasterIP2 \
        -e MasterIP3=$MasterIP3 \
        -e MasterPort=$MasterPort \
        wise2c/haproxy-k8s
EOF

# keepalived start script
cat > /data/lb/start-keepalived.sh << "EOF"
#!/bin/bash
VIRTUAL_IP=192.168.10.100
INTERFACE=ens33
NETMASK_BIT=24
CHECK_PORT=6444
RID=10
VRID=160
MCAST_GROUP=224.0.0.18

docker run -itd --restart=always --name=Keepalived-K8S \
        --net=host --cap-add=NET_ADMIN \
        -e VIRTUAL_IP=$VIRTUAL_IP \
        -e INTERFACE=$INTERFACE \
        -e CHECK_PORT=$CHECK_PORT \
        -e RID=$RID \
        -e VRID=$VRID \
        -e NETMASK_BIT=$NETMASK_BIT \
        -e MCAST_GROUP=$MCAST_GROUP \
        wise2c/keepalived-k8s
EOF

# Copy the scripts to the 13 and 14 machines, then start them
sh /data/lb/start-haproxy.sh && sh /data/lb/start-keepalived.sh

docker ps    # shows the container status; the generated configuration files can be inspected inside the containers
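The two scripts can be pushed to the other masters and started in one pass. A minimal sketch, assuming passwordless SSH and the paths used above:

# Copy the start scripts to 10.13 and 10.14, then launch both containers on each master
for ip in 192.168.10.13 192.168.10.14; do
    ssh root@$ip "mkdir -p /data/lb"
    scp /data/lb/start-haproxy.sh /data/lb/start-keepalived.sh root@$ip:/data/lb/
    ssh root@$ip "sh /data/lb/start-haproxy.sh && sh /data/lb/start-keepalived.sh && docker ps"
done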

1.4.1. Configure the etcd certificates on the 10.12 machine

# Download the cfssl tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

# Set up the cfssl environment
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

# Configure the CA files (the IP addresses are the etcd node IPs)
mkdir /root/ssl
cd /root/ssl

cat >  ca-config.json <<EOF
{
"signing": {
"default": {
  "expiry": "8760h"
},
"profiles": {
  "kubernetes-Soulmate": {
    "usages": [
        "signing",
        "key encipherment",
        "server auth",
        "client auth"
    ],
    "expiry": "8760h"
  }
}
}
}
EOF

#--------------------------------------------------------#

cat >  ca-csr.json <<EOF
{
"CN": "kubernetes-Soulmate",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
  "C": "CN",
  "ST": "shanghai",
  "L": "shanghai",
  "O": "k8s",
  "OU": "System"
}
]
}
EOF

#--------------------------------------------------------#

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.10.12",
    "192.168.10.13",
    "192.168.10.14"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

#--------------------------------------------------------#
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd

# Distribute the etcd certificates from 10.12 to the 13 and 14 machines
mkdir -p /etc/etcd/ssl
cp *.pem /etc/etcd/ssl/

ssh -n 192.168.10.13 "mkdir -p /etc/etcd/ssl && exit"
ssh -n 192.168.10.14 "mkdir -p /etc/etcd/ssl && exit"

scp -r /etc/etcd/ssl/*.pem 192.168.10.13:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem 192.168.10.14:/etc/etcd/ssl/
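Before starting etcd it is worth confirming that the generated certificate actually contains the three node IPs in its SANs. A quick check with the cfssl-certinfo tool installed above (openssl shows the same information):

# The SANs should list 127.0.0.1 and 192.168.10.12/13/14
cfssl-certinfo -cert /etc/etcd/ssl/etcd.pem | grep -A6 sans
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"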

1.4.2. Install etcd on the three master nodes

yum install etcd -y
mkdir -p /var/lib/etcd
# On the 10.12 machine
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name M-kube12 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.10.12:2380 \
  --listen-peer-urls https://192.168.10.12:2380 \
  --listen-client-urls https://192.168.10.12:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.10.12:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster M-kube12=https://192.168.10.12:2380,M-kube13=https://192.168.10.13:2380,M-kube14=https://192.168.10.14:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# On the 10.13 machine
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name M-kube13 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.10.13:2380 \
  --listen-peer-urls https://192.168.10.13:2380 \
  --listen-client-urls https://192.168.10.13:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.10.13:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster M-kube12=https://192.168.10.12:2380,M-kube13=https://192.168.10.13:2380,M-kube14=https://192.168.10.14:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# On the 10.14 machine
cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name M-kube14 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.10.14:2380 \
  --listen-peer-urls https://192.168.10.14:2380 \
  --listen-client-urls https://192.168.10.14:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.10.14:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster M-kube12=https://192.168.10.12:2380,M-kube13=https://192.168.10.13:2380,M-kube14=https://192.168.10.14:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# Enable and start etcd
cp /etc/systemd/system/etcd.service /usr/lib/systemd/system/
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd && systemctl status etcd

# Check the cluster on any etcd node
etcdctl --endpoints=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 \
 --ca-file=/etc/etcd/ssl/ca.pem \
 --cert-file=/etc/etcd/ssl/etcd.pem \
 --key-file=/etc/etcd/ssl/etcd-key.pem  cluster-health

# If everything is healthy, the output looks like this
[root@M-kube13 ~]# etcdctl --endpoints=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 \
>  --ca-file=/etc/etcd/ssl/ca.pem \
>  --cert-file=/etc/etcd/ssl/etcd.pem \
>  --key-file=/etc/etcd/ssl/etcd-key.pem  cluster-health
member 1af68d968c7e3f22 is healthy: got healthy result from https://192.168.10.12:2379
member 55204c19ed228077 is healthy: got healthy result from https://192.168.10.14:2379
member e8d9a97b17f26476 is healthy: got healthy result from https://192.168.10.13:2379
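Beyond cluster-health, a quick functional test is to list the members and write and read a key through the same TLS endpoints. A sketch using the same etcd v2 etcdctl flags as above (as shipped with the CentOS etcd package):

ETCD_FLAGS="--endpoints=https://192.168.10.12:2379,https://192.168.10.13:2379,https://192.168.10.14:2379 --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem"

etcdctl $ETCD_FLAGS member list          # all three members should show up, one of them marked isLeader=true
etcdctl $ETCD_FLAGS set /test/key ok     # write a test key
etcdctl $ETCD_FLAGS get /test/key        # should print "ok"
etcdctl $ETCD_FLAGS rm  /test/key        # clean up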

Docker now comes in two editions, Docker CE and Docker EE: CE is the free community edition and EE is the commercial enterprise edition. We use the CE edition here.

Install Docker on all machines.

Installing Docker with yum

# 1. Install the yum utility packages
yum install -y yum-utils device-mapper-persistent-data lvm2

# 2. Download the official docker-ce yum repo file (already done above, so skipped here)
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 3. Disable the docker-ce-edge repo; edge is the development channel and is unstable, so install the stable channel
yum-config-manager --disable docker-ce-edge
# 4. Refresh the local yum cache
yum makecache fast
# 5. Install docker-ce
yum -y install docker-ce
# 6. Start Docker and enable it at boot
systemctl restart docker && systemctl enable docker && systemctl status docker
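kubelet and Docker must agree on the cgroup driver; the optional kubelet drop-in later in this article assumes cgroupfs, which is Docker's default. A quick sanity check after Docker is up:

docker version --format '{{.Server.Version}}'       # confirm the installed docker-ce version
docker info 2>/dev/null | grep -i "cgroup driver"   # should report "cgroupfs" to match the kubelet flag used later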

Run hello-world to verify the installation

[root@localhost ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Use the DaoCloud registry mirror (this step can be skipped)

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io
# docker version >= 1.12
# {"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}
# Success.
# You need to restart docker to take effect: sudo systemctl restart docker
systemctl restart docker

Install kubectl, kubelet, kubeadm, and kubernetes-cni on all machines

yum list kubectl kubelet kubeadm kubernetes-cni     # list the installable packages
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.sohu.com
* updates: mirrors.sohu.com
# Available packages
kubeadm.x86_64                                    1.14.3-0                                             kubernetes
kubectl.x86_64                                    1.14.3-0                                             kubernetes
kubelet.x86_64                                    1.14.3-0                                             kubernetes
kubernetes-cni.x86_64                             0.7.5-0                                              kubernetes
[root@localhost ~]#

# Then install kubectl, kubelet, kubeadm, and kubernetes-cni
yum install -y kubectl kubelet kubeadm kubernetes-cni

# kubelet talks to the rest of the cluster and manages the lifecycle of the Pods and containers on its node.
# kubeadm is Kubernetes' automated deployment tool; it lowers the deployment barrier and speeds things up.
# kubectl is the Kubernetes cluster management CLI.
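Because the repo tracks the latest release, nodes installed at different times can end up on different versions. If you want every node on exactly the release shown in the listing above, the version can be pinned in the install (versions assumed from that listing):

# Pin kubelet/kubeadm/kubectl to the same release on every node
yum install -y kubelet-1.14.3-0 kubeadm-1.14.3-0 kubectl-1.14.3-0 kubernetes-cni-0.7.5-0
rpm -qa | grep -E 'kubelet|kubeadm|kubectl|kubernetes-cni'   # verify the installed versions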

Modify the kubelet configuration file (optional)

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # may also live in the following path; this step is optional
/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Change this line
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# Add this line
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"

# Reload the configuration
systemctl daemon-reload

# 1. Enable command completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

# Start the kubelet service on all hosts
systemctl enable kubelet && systemctl start kubelet

kubeadm init mainly performs the following steps:

- [preflight] checks the system (kernel, cgroups, swap, required ports) and pre-pulls the control-plane images
- [certs] generates the CA and component certificates under /etc/kubernetes/pki (the etcd certificates are skipped here because an external etcd cluster is configured)
- [kubeconfig] writes the kubeconfig files for admin, kubelet, controller-manager, and scheduler under /etc/kubernetes
- [control-plane] writes static Pod manifests for kube-apiserver, kube-controller-manager, and kube-scheduler to /etc/kubernetes/manifests
- [upload-config] / [upload-certs] stores the kubeadm configuration and control-plane certificates in the cluster so the other masters can join
- [mark-control-plane] labels and taints the master node
- [bootstrap-token] creates the bootstrap token used by kubeadm join
- [addons] installs CoreDNS and kube-proxy
1.7.1. Create the cluster initialization configuration file on the 10.12 machine

kubeadm config print init-defaults > kubeadm-config.yaml    # this command generates a default init configuration file; you can also write one by hand

# 1. Create the cluster initialization configuration file
cat <<EOF > /etc/kubernetes/kubeadm-master.config
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
controlPlaneEndpoint: "192.168.10.100:6443"
imageRepository: registry.aliyuncs.com/google_containers

apiServer:
  certSANs:
  - 192.168.10.12
  - 192.168.10.13
  - 192.168.10.14
  - 192.168.10.100
etcd:
  external:
    endpoints:
    - https://192.168.10.12:2379
    - https://192.168.10.13:2379
    - https://192.168.10.14:2379
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF

# 2. Then run the initialization
kubeadm config images pull --config /etc/kubernetes/kubeadm-master.config   # optionally pre-pull the images first
kubeadm init --config /etc/kubernetes/kubeadm-master.config --experimental-upload-certs | tee kubeadm-init.log
# Piping through tee writes the initialization log to kubeadm-init.log; the --experimental-upload-certs flag
# lets kubeadm distribute the certificates automatically when the other control-plane nodes join later.

# 3. Cleaning up after a failed initialization
kubeadm reset       # whether init failed or succeeded, kubeadm reset cleans the cluster state off the node
# or
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a |awk '{print $1}' |xargs docker rm -f
systemctl stop kubelet

# A successful initialization ends like this
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5 \
    --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d \
    --experimental-control-plane --certificate-key 3044cb04c999706795b28c1d3dcd2305dcf181787d7c6537284341a985395c20

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5 \
    --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d

# 4. Then copy the kubeconfig
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config      # if another user needs kubectl, copy the file into their $HOME and adjust ownership
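Both the bootstrap token and the uploaded certificates expire (24 hours and 2 hours respectively), so the join commands printed above stop working after a while. They can be regenerated later with kubeadm's own subcommands, including the upload-certs phase mentioned in the init output:

# Print a fresh worker join command (creates a new token)
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print the new --certificate-key
kubeadm init phase upload-certs --experimental-upload-certs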

1.7.2. Check the current status

[root@M-kube12 kubernetes]# kubectl get node
NAME       STATUS     ROLES    AGE     VERSION
m-kube12   NotReady   master   3m40s   v1.14.3      # the STATUS is still NotReady

[root@M-kube12 kubernetes]# kubectl -n kube-system get pod
NAME                               READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-fmlsh           0/1     Pending   0          3m40s
coredns-8686dcc4fd-m22j7           0/1     Pending   0          3m40s
etcd-m-kube12                      1/1     Running   0          2m59s
kube-apiserver-m-kube12            1/1     Running   0          2m53s
kube-controller-manager-m-kube12   1/1     Running   0          2m33s
kube-proxy-4kg8d                   1/1     Running   0          3m40s
kube-scheduler-m-kube12            1/1     Running   0          2m45s

[root@M-kube12 kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

1.7.3. Deploy the flannel network (run on the 12 machine)

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Image version in the manifest: quay.io/coreos/flannel:v0.11.0-amd64
cat kube-flannel.yml | grep image
cat kube-flannel.yml | grep 10.244
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#willdockerhub/flannel:v0.11.0-amd64#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
# or
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

# Wait a while, then check that the node and all pods reach the Running state
[root@M-fana3 kubernetes]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
m-fana3   Ready    master   42m   v1.14.3       # the status is now Ready
[root@M-fana3 kubernetes]# kubectl -n kube-system get pod
NAME                              READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-2z6m2          1/1     Running   0          42m
coredns-8686dcc4fd-4k7mm          1/1     Running   0          42m
etcd-m-fana3                      1/1     Running   0          41m
kube-apiserver-m-fana3            1/1     Running   0          41m
kube-controller-manager-m-fana3   1/1     Running   0          41m
kube-flannel-ds-amd64-6zrzt       1/1     Running   0          109s
kube-proxy-lc8d5                  1/1     Running   0          42m
kube-scheduler-m-fana3            1/1     Running   0          41m

# If you run into a situation like the following, the image pull has probably failed:
kubectl -n kube-system get pod
NAME                               READY   STATUS                  RESTARTS   AGE
coredns-8686dcc4fd-c9mw7           0/1     Pending                 0          43m
coredns-8686dcc4fd-l8fpm           0/1     Pending                 0          43m
kube-apiserver-m-kube12            1/1     Running                 0          42m
kube-controller-manager-m-kube12   1/1     Running                 0          17m
kube-flannel-ds-amd64-gcmmp        0/1     Init:ImagePullBackOff   0          11m
kube-proxy-czzk7                   1/1     Running                 0          43m
kube-scheduler-m-kube12            1/1     Running                 0          42m

# Inspect the pod with:
#   kubectl describe pod kube-flannel-ds-amd64-gcmmp --namespace=kube-system
# The events at the end show the pull errors; you can pull the image manually or install flannel from a binary.
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       11m                    default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-gcmmp to m-kube12
  Normal   Pulling         11m                    kubelet, m-kube12  Pulling image "willdockerhub/flannel:v0.11.0-amd64"
  Warning  FailedMount     7m27s                  kubelet, m-kube12  MountVolume.SetUp failed for volume "flannel-token-6g9n7" : couldn't propagate object cache: timed out waiting for the condition
  Warning  FailedMount     7m27s                  kubelet, m-kube12  MountVolume.SetUp failed for volume "flannel-cfg" : couldn't propagate object cache: timed out waiting for the condition
  Warning  Failed          4m21s                  kubelet, m-kube12  Failed to pull image "willdockerhub/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = context canceled
  Warning  Failed          3m53s                  kubelet, m-kube12  Failed to pull image "willdockerhub/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Failed          3m16s                  kubelet, m-kube12  Failed to pull image "willdockerhub/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
  Warning  Failed          3m16s (x3 over 4m21s)  kubelet, m-kube12  Error: ErrImagePull
  Normal   SandboxChanged  3m14s                  kubelet, m-kube12  Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff         2m47s (x6 over 4m21s)  kubelet, m-kube12  Back-off pulling image "willdockerhub/flannel:v0.11.0-amd64"
  Warning  Failed          2m47s (x6 over 4m21s)  kubelet, m-kube12  Error: ImagePullBackOff
  Normal   Pulling         2m33s (x4 over 7m26s)  kubelet, m-kube12  Pulling image "willdockerhub/flannel:v0.11.0-amd64"
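When the pull keeps timing out like this, the manual download mentioned above is the simplest workaround: fetch the image by hand on every node, and retag it in case your manifest still references the quay.io name. A sketch, assuming the mirror image name used in the edited kube-flannel.yml:

# Pull the flannel image manually on each node, then retag it for manifests that still use the quay.io name
docker pull willdockerhub/flannel:v0.11.0-amd64
docker tag willdockerhub/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker images | grep flannel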

Join the cluster from the 13 and 14 machines

# Run the control-plane join command
kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5 \
    --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d \
    --experimental-control-plane --certificate-key 3044cb04c999706795b28c1d3dcd2305dcf181787d7c6537284341a985395c20

# Copy the kubeconfig to the user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify the cluster state
kubectl -n kube-system get pod -o wide  # check how the pods are running
kubectl get nodes -o wide               # check the nodes
kubectl -n kube-system get svc          # check the services
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   16m
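The worker nodes (10.15 and 10.16) join with the shorter command from the init output, without the --experimental-control-plane flags. Once every node reports Ready, a small deployment makes a quick smoke test; the nginx image and replica count here are only illustrative:

# On N-kube15 and N-kube16 (worker join)
kubeadm join 192.168.10.100:6443 --token y6v90q.i6bl1bwcgg8clvh5 \
    --discovery-token-ca-cert-hash sha256:179c5689ef32be2123c9f02015ef25176d177c54322500665f1170f26368ae3d

# Back on a master: smoke-test scheduling across the nodes
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide        # the pods should spread over the worker nodes and reach Running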