Installing Kubernetes with kubeadm

Submitted by Deadly on 2020-11-26 03:37:19

一. Kubernetes can be set up in three ways:

        1. minikube (usually used in test environments; do not use it in production)

        2. kubeadm (a quick way to deploy Kubernetes; relatively simple, and suitable for production)

        3. Binary installation (a complex process with many pitfalls)

二. Installing Kubernetes with kubeadm:

        1. Environment:

                         IP address                  Hostname
                         192.168.1.100               k8s-master
                         192.168.1.101               k8s-node1
VM configuration: OS: CentOS 7.5

      CPU: 2 cores or more recommended

      Memory: 2 GB or more recommended

        2. Prerequisites

    2.1 Disable the firewall:

systemctl disable firewalld   # prevent firewalld from starting at boot
systemctl stop firewalld      # stop firewalld now

    2.2 Disable SELinux

setenforce 0   # disable SELinux temporarily (until reboot)

     2.3 Disable swap

swapoff -a   # disable swap immediately (does not persist across reboots)
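Note that swapoff -a only lasts until the next reboot; to make the change permanent, the swap entry in /etc/fstab must also be commented out. Below is a minimal sketch of that edit, run against a throwaway copy so nothing on the system is touched; on a real node the target is /etc/fstab and the device names will differ:

```shell
# Work on a temporary copy; on a real node the target is /etc/fstab.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults  0 0
/dev/mapper/centos-swap swap  swap  defaults  0 0
EOF

# Comment out every fstab line that mounts a swap device.
sed -i '/\sswap\s/s/^/#/' "$fstab"
cat "$fstab"
```

Once the real /etc/fstab is edited and swap is freed with swapoff -a, kubeadm's swap preflight check passes without needing --ignore-preflight-errors=Swap.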

    2.4 Create the k8s sysctl configuration file

vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# apply the configuration
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
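Each key in k8s.conf corresponds to a file under /proc/sys, with the dots replaced by slashes, so after sysctl -p you can read that file directly and expect to see 1. A small helper, hypothetical and purely for illustration, that maps a key to its path:

```shell
# Translate a sysctl key like net.ipv4.ip_forward into its /proc/sys path.
sysctl_path() {
  echo "/proc/sys/$(echo "$1" | tr . /)"
}

sysctl_path net.bridge.bridge-nf-call-iptables
# → /proc/sys/net/bridge/bridge-nf-call-iptables
```

On a live node, cat "$(sysctl_path net.ipv4.ip_forward)" should print 1 once the configuration has been applied.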

        3. Install docker-ce

yum install -y yum-utils device-mapper-persistent-data lvm2    # install docker dependencies
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo   # add the docker yum repository
yum list docker-ce.x86_64 --showduplicates | sort -r    # list all available docker-ce versions

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.huaweicloud.com
Installed Packages
docker-ce.x86_64    18.06.1.ce-3.el7           @docker-ce-stable
Available Packages
docker-ce.x86_64    3:18.09.3-3.el7            docker-ce-stable
docker-ce.x86_64    3:18.09.2-3.el7            docker-ce-stable
docker-ce.x86_64    3:18.09.1-3.el7            docker-ce-stable
docker-ce.x86_64    3:18.09.0-3.el7            docker-ce-stable
docker-ce.x86_64    18.06.3.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.06.2.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.06.1.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.06.0.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    18.03.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.12.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.12.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.09.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.09.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.3.ce-1.el7           docker-ce-stable
docker-ce.x86_64    17.03.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.0.ce-1.el7.centos    docker-ce-stable

yum makecache fast    # rebuild the yum cache
yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7    # install docker
systemctl start docker     # start docker
systemctl enable docker    # enable docker at boot

 

        4. Create the Kubernetes repo and install Kubernetes

# create the Kubernetes yum repo
cat > /etc/yum.repos.d/k8s.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
# rebuild the yum cache
yum makecache fast
# install the Kubernetes components
yum install -y kubelet kubeadm kubectl
# edit the kubelet configuration
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false   # relax the rule that swap must be off when kubelet starts

# enable kubelet at boot
systemctl enable kubelet.service

 

    All of the steps above must be executed on both machines, k8s-master and k8s-node1.

   Everything from here on is executed on k8s-master only.

    4.1 Adjust the kubeadm configuration

# generate the default kubeadm configuration file kubeadm.conf
kubeadm config print init-defaults > kubeadm.conf

# change the image repository kubeadm pulls from during initialization; it defaults to
# Google's registry, which is not reachable from mainland China without a proxy, so
# switch it to the Aliyun mirror:

sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" kubeadm.conf

# pin the Kubernetes version
sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.13.4/g" kubeadm.conf

# pull the component images required for initialization
kubeadm config images pull --config kubeadm.conf

# set the master node IP
sed -i "s/advertiseAddress: .*/advertiseAddress: 192.168.1.100/g" kubeadm.conf
# set the pod network CIDR
sed -i "s/podSubnet: .*/podSubnet: \"10.244.0.0\/16\"/g" kubeadm.conf
# initialize the cluster
kubeadm init --config kubeadm.conf

# if initialization aborts with a swap warning, rerun it with the warning ignored:
kubeadm init --config kubeadm.conf --ignore-preflight-errors=Swap
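The sed substitutions above can be dry-run against a stand-in for kubeadm.conf before touching the real file. The fragment below contains only the four fields being edited (the file generated by kubeadm has many more, and the placeholder values here are just for illustration):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
advertiseAddress: 1.2.3.4
imageRepository: k8s.gcr.io
kubernetesVersion: v1.13.0
podSubnet: ""
EOF

# The same substitutions as above, applied to the stand-in file.
sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" "$conf"
sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.13.4/g" "$conf"
sed -i "s/advertiseAddress: .*/advertiseAddress: 192.168.1.100/g" "$conf"
sed -i "s/podSubnet: .*/podSubnet: \"10.244.0.0\/16\"/g" "$conf"
cat "$conf"
```

Because each pattern anchors on the field name and replaces the rest of the line, the edits are idempotent: rerunning them leaves the file unchanged.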

# Output after a successful initialization:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.100:6443 --token lb7b9b.mnb0oe0su1rtemnm --discovery-token-ca-cert-hash sha256:2ec77ce65a291770f6fcf42b60fc5b2200a8a381d46ce2b1bf7ec73310a95727
Note: the kubeadm join ... line above is important; it is the command every other node must run to join this cluster.
If you lose it, regenerate it with: kubeadm token create --print-join-command
After a successful init, do not forget to run the commands from the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If the VM becomes sluggish after initialization, increase its memory allocation.

 

    4.2 Check cluster health

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

 

        5. Deploy the flannel network

mkdir -p ~/k8s/
cd ~/k8s

# download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# apply the flannel manifest
kubectl apply -f kube-flannel.yml

# check flannel's status
kubectl get ds -l app=flannel -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     2         2         2       1            2           beta.kubernetes.io/arch=amd64     22h
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       22h
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     22h
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   22h
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     22h
# check node status
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   NotReady    master   23h   v1.13.4

# The master node is not yet in the Ready state.
# That is because kubeadm places an extra taint on the node: node.kubernetes.io/not-ready:NoSchedule,
# which simply means a node that is not yet Ready accepts no scheduling.
# But a node will not become Ready until a network plugin has been deployed,
# so edit kube-flannel.yml and add a toleration for the node.kubernetes.io/not-ready:NoSchedule taint:

vi ~/k8s/kube-flannel.yml
# section to modify:
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoSchedule

Run kubectl apply -f kube-flannel.yml again; this time the flannel deployment completes successfully
and the node reaches the Ready state.

 

 

        6. Run kubectl get pod --all-namespaces -o wide and make sure every Pod is in the Running state.

kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m    10.244.0.3      node1   <none>
kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m    10.244.0.2      node1   <none>
kube-system   etcd-node1                      1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-apiserver-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m     192.168.61.11   node1   <none>
kube-system   kube-proxy-fb542                1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-scheduler-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>

 

       7. Allow the master node to run workloads

kubectl describe node k8s-master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is a test environment, remove this taint so the master node can run workloads:

kubectl taint nodes k8s-master node-role.kubernetes.io/master-
node "k8s-master" untainted

        8. Verify that all pods are in the Running state

[root@K8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
default       curl-66959f6557-6qvpz                1/1     Running   1          23h     10.244.0.6      k8s-master   <none>           <none>
default       nginx-7cdbd8cdc9-nvkcl               1/1     Running   0          5h56m   10.244.1.2      k8s-node1    <none>           <none>
kube-system   coredns-78d4cf999f-2zg4q             1/1     Running   1          23h     10.244.0.5      k8s-master   <none>           <none>
kube-system   coredns-78d4cf999f-snnkz             1/1     Running   1          23h     10.244.0.7      k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   2          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   6          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   7          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-bb6m8          1/1     Running   1          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-px2fv          1/1     Running   1          22h     192.168.1.101   k8s-node1    <none>           <none>
kube-system   kube-proxy-bfgq4                     1/1     Running   2          23h     192.168.1.100   k8s-master   <none>           <none>
kube-system   kube-proxy-p2hqr                     1/1     Running   1          22h     192.168.1.101   k8s-node1    <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   7          23h     192.168.1.100   k8s-master   <none>           <none>

       

        9. Test DNS

[root@K8s-master ~]# kubectl run curl --image=radial/busyboxplus:curl -it 
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead. If you don't see a command prompt, try pressing enter.
# inside the pod, run nslookup kubernetes.default to confirm that resolution works:
[ root@curl-66959f6557-6qvpz:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-66959f6557-6qvpz:/ ]$

 

        10. Join a node to the Kubernetes cluster

[root@K8s-master ~]# kubeadm join 192.168.1.100:6443 --token istyp6.rzgpkpjpv0l3b5f8 --discovery-token-ca-cert-hash sha256:2ec77ce65a291770f6fcf42b60fc5b2200a8a381d46ce2b1bf7ec73310a95727 --ignore-preflight-errors=Swap

[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

        [WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.61.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
[discovery] Successfully established connection with API Server "192.168.61.11:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

 

 

        11. List the cluster nodes:

kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
k8s-master     Ready     master    26m       v1.13.4
k8s-node1      Ready     <none>    2m        v1.13.4

 

        12. To remove the k8s-node1 node from the cluster, run the following commands:

On the master node:

kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1

On k8s-node1:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
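Since the drain/delete/reset sequence is destructive, it can help to collect it into a small helper that only prints the commands for review instead of running them. This is a hypothetical convenience, not part of kubeadm; the node name is just an argument:

```shell
# Print (but do not execute) the full removal sequence for a given node.
print_node_removal() {
  node="$1"
  echo "# on the master:"
  echo "kubectl drain $node --delete-local-data --force --ignore-daemonsets"
  echo "kubectl delete node $node"
  echo "# on $node itself:"
  echo "kubeadm reset"
  echo "ip link delete cni0"
  echo "ip link delete flannel.1"
  echo "rm -rf /var/lib/cni/"
}

print_node_removal k8s-node1
```

Reviewing the printed commands before pasting them into the right machine avoids draining the wrong node.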

Time was limited when writing this, so unexpected errors cannot be ruled out; please consult the relevant technical documentation to keep everything running correctly.

 

Original source: https://www.cnblogs.com/Smbands/p/10520142.html
