Kubernetes Cluster in Practice (08): Upgrading the Cluster

Posted by 余生颓废 on 2020-03-21 22:00:59

ECK has not been on the cluster for long and is not yet in production use; the trial phase also surfaced quite a few problems. With the formal deployment now imminent, I wanted to upgrade the Kubernetes cluster first. After reading some material, the main concern is that my cluster has only a single master node. It is a powerful one, a HUAWEI TaiShan 2280 V2 server with dual Kunpeng 920 CPUs (48 cores per socket), so I made full use of it: it runs the master components, Traefik, and kubernetes-dashboard (Nexus has no arm64 image, which is a pity), but it is still a single point of failure. So, while the cluster carries little load, I decided to get the upgrade done.
A cluster upgrade has two parts: upgrading the Kubernetes orchestration engine and upgrading the Docker container runtime.

Upgrading Kubernetes

For a Kubernetes upgrade I strongly recommend following the official documentation; if reading English is no problem for you, you can skip what follows, which is essentially my own translated notes. My environment is upgraded offline: the required RPM packages and Docker images were prepared in advance (see the first three posts in this series for details).
官方文档地址:https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
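Since the upgrade is offline, the pre-downloaded RPMs need to be reachable by yum. The following sketch shows one way to wire that up; the directory path, repo id, and file name are hypothetical examples, not from my actual setup:

```shell
#!/bin/sh
# Sketch: generate a yum .repo file pointing at a directory of
# pre-downloaded kubeadm/kubelet/kubectl RPMs for the offline upgrade.
# RPM_DIR and REPO_FILE are hypothetical examples.
RPM_DIR="${RPM_DIR:-/opt/k8s-rpms}"          # hypothetical RPM directory
REPO_FILE="${REPO_FILE:-kubernetes-local.repo}"

# After copying the RPMs in, you would run `createrepo "$RPM_DIR"` to build
# the repo metadata, then install the file below into /etc/yum.repos.d/.
cat > "$REPO_FILE" <<EOF
[kubernetes-local]
name=Kubernetes local offline repo
baseurl=file://$RPM_DIR
enabled=1
gpgcheck=0
EOF

echo "wrote $REPO_FILE"
```

With the repo file installed under /etc/yum.repos.d/ and the metadata built, the `yum install` commands below can resolve the packages locally.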

Upgrading the control-plane (master) nodes

Upgrade the first control-plane node (master)

1. Upgrade kubeadm

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes

Verify the kubeadm version:

sudo kubeadm version

2. Drain the node

# replace <cp-node-name> with the name of your control plane node
sudo kubectl drain <cp-node-name> --ignore-daemonsets

3. Check the upgrade plan

sudo kubeadm upgrade plan

This prints output similar to the following (version numbers will differ depending on your environment):

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.17.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.16.0   v1.17.0

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.0   v1.17.0
Controller Manager   v1.16.0   v1.17.0
Scheduler            v1.16.0   v1.17.0
Kube Proxy           v1.16.0   v1.17.0
CoreDNS              1.6.2     1.6.5
Etcd                 3.3.15    3.4.3-0

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.17.0

This command only checks whether the cluster can be upgraded and which versions it can be upgraded to; it does not change anything.
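If you want to script around the plan step, the version lines are easy to pick out of its output. A minimal sketch, with the `kubeadm upgrade plan` output simulated by a heredoc (in practice you would pipe the real command through the same awk filters):

```shell
#!/bin/sh
# Sketch: extract the current cluster version and the kubeadm version
# from `kubeadm upgrade plan` output. The output is simulated here so the
# snippet is self-contained.
plan_output() {
  cat <<'EOF'
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.17.0
EOF
}

# Each line ends with the version string, so $NF is enough.
cluster_version=$(plan_output | awk '/Cluster version:/ {print $NF}')
kubeadm_version=$(plan_output | awk '/kubeadm version:/ {print $NF}')
echo "cluster=$cluster_version kubeadm=$kubeadm_version"
```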

4. Run the upgrade with the actual target version

# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.17.x

On success, the output ends with:

......
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

5. Manually upgrade the CNI provider plugin

For example, flannel. Since the network plugin did not change in this upgrade, I skipped this step.

6. Uncordon the node

# replace <cp-node-name> with the name of your control plane node
sudo kubectl uncordon <cp-node-name>

Upgrade the other control-plane (master) nodes

Simply run the upgrade command:

sudo kubeadm upgrade node

Upgrade kubelet and kubectl on every control-plane node

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes

Reload the systemd configuration and restart kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
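After the restart it is worth confirming the version skew: Kubernetes allows kubelet to be at most one minor version older than kube-apiserver. A minimal sketch with example version strings (in practice, take them from `kubectl version` and `kubelet --version`):

```shell
#!/bin/sh
# Sketch: check the kubelet/kube-apiserver version-skew policy.
# The two version strings are placeholders, not real cluster output.
minor_of() {
  # v1.17.0 -> 17
  echo "$1" | cut -d. -f2
}

apiserver_version="v1.17.0"   # example value
kubelet_version="v1.17.0"     # example value

skew=$(( $(minor_of "$apiserver_version") - $(minor_of "$kubelet_version") ))
# kubelet may be 0 or 1 minor versions behind the API server.
if [ "$skew" -ge 0 ] && [ "$skew" -le 1 ]; then
  echo "version skew ok ($kubelet_version vs $apiserver_version)"
else
  echo "version skew violation"
fi
```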

Upgrading the worker nodes

1. Upgrade kubeadm

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes

2. Drain the node

# replace <node-to-drain> with the name of your node you are draining
sudo kubectl drain <node-to-drain> --ignore-daemonsets

The command produces output similar to:

node/ip-172-31-85-18 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
node/ip-172-31-85-18 drained

3. Upgrade the kubelet configuration

sudo kubeadm upgrade node

4. Upgrade kubelet and kubectl

# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes

Reload the systemd configuration and restart kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

5. Uncordon the node

# replace <node-to-drain> with the name of your node 
sudo kubectl uncordon <node-to-drain>

Verify the upgrade

sudo kubectl get nodes
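To confirm that no node was missed, you can compare each node's VERSION column against the target version. A sketch with the `kubectl get nodes` output simulated by a heredoc (pipe the real command in practice; the node names are hypothetical):

```shell
#!/bin/sh
# Sketch: check that every node reports the target version after the
# upgrade. Node list is simulated so the snippet is self-contained.
target="v1.17.0"

get_nodes() {
  cat <<'EOF'
NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   90d   v1.17.0
worker-1   Ready    <none>   90d   v1.17.0
EOF
}

# Skip the header, then collect names of nodes whose VERSION (column 5)
# does not match the target.
stale=$(get_nodes | tail -n +2 | awk -v t="$target" '$5 != t {print $1}')
if [ -z "$stale" ]; then
  echo "all nodes at $target"
else
  echo "nodes not yet upgraded: $stale"
fi
```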