Installing a Kubernetes (k8s) Cluster from Binaries (CentOS 7)
1. Environment Preparation and Planning
This guide uses CentOS 7 with Docker 1.13.1.
1.1 Download the k8s and etcd release tarballs
1.2 Master and node layout for the VMs

Role | IP | Components |
---|---|---|
master | 192.168.100.100 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker |
node01 | 192.168.100.101 | kube-proxy, kubelet, docker |
node02 | 192.168.100.102 | kube-proxy, kubelet, docker |
2. Steps on All Three Machines
2.1 Disable the firewall and update yum

```shell
# Disable the CentOS firewall
systemctl disable firewalld
systemctl stop firewalld

# Update yum
yum -y update

# Install Docker (the -y flag answers "yes" to all prompts automatically)
yum -y install docker

# Start Docker
service docker start
# To start Docker automatically after a reboot, run: systemctl enable docker

# Create a directory to hold all the k8s-related files
mkdir -p /var/local/k8s
cd /var/local/k8s
```
2.2 If anything goes wrong, inspect the system logs with `tail -f /var/log/messages` or `journalctl -f -u <serviceName>`.
3. Master Installation
Upload etcd-v3.3.23-linux-amd64.tar.gz and kubernetes-server-linux-amd64.tar.gz to the k8s directory.
3.1 The etcd service

```shell
# Unpack etcd
tar -zxvf etcd-v3.3.23-linux-amd64.tar.gz
# Enter the etcd-v3.3.23-linux-amd64 directory and copy the etcd and etcdctl binaries to /usr/bin
cd etcd-v3.3.23-linux-amd64
cp etcd etcdctl /usr/bin/
```
3.1.1 Configure etcd.service

```shell
# Create the systemd unit file
vi /usr/lib/systemd/system/etcd.service
```

Add the following content:

```
[Unit]
Description=Etcd Server

[Service]
Type=notify
TimeoutStartSec=0
Restart=always
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
```
3.1.2 Create the configuration file

```shell
mkdir -p /var/lib/etcd/
mkdir -p /etc/etcd/
vi /etc/etcd/etcd.conf
```

Add the following content:

```
ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379"
```
3.1.3 Start and test the etcd service

```shell
# Reload the systemd unit files
systemctl daemon-reload
# Enable etcd at boot, then start and check it
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service
```

```shell
[root@localhost ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy
```

That completes the etcd setup.
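As an extra sanity check, you can write a key and read it back. This is a minimal sketch assuming etcdctl from the 3.3 series, which defaults to the v2 API (`set`/`get`); with `ETCDCTL_API=3` the commands would be `put`/`get` instead. The key name is arbitrary.

```shell
# Write a throwaway key, read it back, then delete it.
# If etcd is not reachable, fall back to a message instead of failing.
etcdctl set /smoke-test "hello" 2>/dev/null || echo "etcd unreachable"
value=$(etcdctl get /smoke-test 2>/dev/null || echo "etcd unreachable")
echo "read back: $value"
etcdctl rm /smoke-test 2>/dev/null || true
```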
Before the next steps, unpack kubernetes-server-linux-amd64.tar.gz:

```shell
tar -zxvf kubernetes-server-linux-amd64.tar.gz
# Enter the unpacked directory and copy the kube-apiserver, kube-controller-manager,
# kube-scheduler, and kubectl binaries to /usr/bin
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
```
3.2 Setting up the kube-apiserver service

```shell
# Create the systemd unit file
vi /usr/lib/systemd/system/kube-apiserver.service
```

Add the following content:

```
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_LOG \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Create the apiserver configuration file:

```shell
mkdir /etc/kubernetes
vi /etc/kubernetes/apiserver
```

Add the following content:

```
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_LOG="--logtostderr=false --log-dir=/var/log/kubernetes/apiserver --v=2"
KUBE_API_ARGS=" "
```
3.2.1 Verify the kube-apiserver configuration

```shell
systemctl daemon-reload
systemctl start kube-apiserver.service
# Enable it at boot
systemctl enable kube-apiserver.service
```

```shell
# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-01-13 13:42:06 CST; 14min ago
 Main PID: 1577 (kube-apiserver)
```
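You can also probe the API server over HTTP. A minimal sketch, assuming the insecure port 8080 configured above; on a healthy server the `/healthz` endpoint returns `ok`:

```shell
# Query the apiserver's health endpoint; fall back to a message if it
# is not reachable rather than aborting the script.
health=$(curl -s --max-time 3 http://127.0.0.1:8080/healthz || echo "apiserver unreachable")
echo "apiserver healthz: $health"
```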
3.3 Setting up the kube-controller-manager service
kube-controller-manager depends on the kube-apiserver service.

Create the systemd unit file with `vi /usr/lib/systemd/system/kube-controller-manager.service`:

```
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Create the configuration file with `vi /etc/kubernetes/controller-manager`:

```
KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_CONTROLLER_MANAGER_ARGS=" "
```
3.3.1 Verify the kube-controller-manager setup

```shell
systemctl daemon-reload
systemctl start kube-controller-manager.service
# As in 3.2.1, "running" in the status output means the setup succeeded
systemctl status kube-controller-manager.service
# Enable it at boot
systemctl enable kube-controller-manager.service
```
3.4 The kube-scheduler service
kube-scheduler also depends on the kube-apiserver service.

Create the systemd unit file with `vi /usr/lib/systemd/system/kube-scheduler.service`:

```
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Create the configuration file with `vi /etc/kubernetes/scheduler`:

```
KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/var/log/kubernetes/scheduler --v=2"
```
3.5 Enable and start all master services

```shell
# With the configuration above in place, start the services in order
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

# Check the health of each service:
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
```
3.6 Health check

```shell
# Check the component status
[root@localhost ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
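Two more quick checks from the master, sketched under the assumption that kubectl falls back to the local insecure endpoint (http://localhost:8080) when no kubeconfig is set:

```shell
# Show the cluster endpoint and the built-in namespaces; if the
# apiserver is not reachable, print a message instead of failing.
info=$(kubectl cluster-info 2>/dev/null || echo "apiserver unreachable")
echo "$info"
kubectl get namespaces 2>/dev/null || echo "apiserver unreachable"
```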
4. Node Installation
4.1 Environment preparation
The components installed on each node are:
- docker (already deployed on every machine; see step 2.1)
- kube-proxy
- kubelet

You can either download kubernetes-node-linux-amd64.tar.gz or reuse kubernetes-server-linux-amd64.tar.gz; the server tarball contains everything a node needs. This guide uses kubernetes-node-linux-amd64.tar.gz.

```shell
# Switch to the k8s directory and upload the node tarball there
cd /var/local/k8s
# Unpack it
tar -zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin
# Copy the kubelet and kube-proxy binaries to /usr/bin
cp kubelet kube-proxy /usr/bin/
```
4.2 Installing kube-proxy

Create the systemd unit file with `vi /usr/lib/systemd/system/kube-proxy.service`:

```
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Create the configuration directory and add the configuration files:

```shell
mkdir -p /etc/kubernetes
vi /etc/kubernetes/proxy
```

Add the following content:

```
KUBE_PROXY_ARGS=""
```

Then create `vi /etc/kubernetes/config`:

```
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
# Point this at your master's address
KUBE_MASTER="--master=http://192.168.100.100:8080"
```
4.2.1 Verifying kube-proxy

```shell
# systemctl daemon-reload
# systemctl start kube-proxy
# netstat -lntp | grep kube-proxy
tcp        0      0 127.0.0.1:10249    0.0.0.0:*    LISTEN    8954/kube-proxy
tcp6       0      0 :::10256           :::*         LISTEN    8954/kube-proxy
```
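Port 10249 in the listing above is kube-proxy's metrics port; as a minimal extra check (assuming that default port), its `/proxyMode` endpoint reports which proxy mode is in use, e.g. `iptables`:

```shell
# Ask kube-proxy which mode it is running in; fall back to a message
# if the metrics endpoint is not reachable.
mode=$(curl -s --max-time 3 http://127.0.0.1:10249/proxyMode || echo "kube-proxy unreachable")
echo "kube-proxy mode: $mode"
```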
4.4 Installing kubelet

Create the systemd unit file with `vi /usr/lib/systemd/system/kubelet.service`:

```
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```
```shell
mkdir -p /var/lib/kubelet
vi /etc/kubernetes/kubelet
```

Add the following content:

```
KUBELET_ADDRESS="--address=0.0.0.0"
# Your node IP (this machine's IP)
KUBELET_HOSTNAME="--hostname-override=192.168.100.101"
# Your master IP
KUBELET_API_SERVER="--api-servers=http://192.168.100.100:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"
KUBELET_ARGS="--cluster-dns=192.168.100.101 --cluster-domain=cluster.node1 --enable-server=true --feature-gates=AttachVolumeLimit=false --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"
```
Create the kubeconfig file that registers the kubelet with the master:

```shell
vi /var/lib/kubelet/kubeconfig
```

```
apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://192.168.100.100:8080  # your master IP
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context
```
4.5 Verifying the node

```shell
systemctl daemon-reload
systemctl start kubelet.service
```

```shell
# netstat -tnlp | grep kubelet   ## You should see output like the following; if not, see step 5
tcp        0      0 127.0.0.1:10248    0.0.0.0:*    LISTEN    9812/kubelet
tcp        0      0 127.0.0.1:44458    0.0.0.0:*    LISTEN    9812/kubelet
tcp6       0      0 :::10255           :::*         LISTEN    9812/kubelet
tcp6       0      0 :::10250           :::*         LISTEN    9812/kubelet
```
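Port 10248 in the listing above is the kubelet's local health port; a minimal sketch of probing it (assuming that default port; a healthy kubelet answers `ok`):

```shell
# Query the kubelet's healthz endpoint; fall back to a message if it
# is not reachable.
health=$(curl -s --max-time 3 http://127.0.0.1:10248/healthz || echo "kubelet unreachable")
echo "kubelet healthz: $health"
```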
```shell
# Finally, enable kubelet and kube-proxy to start on boot
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
```
5. Node Fails to Register

```shell
# tail -f /var/log/messages
Jan 13 14:04:19 localhost kubelet: F0113 14:04:19.645151 9505 server.go:262] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
```

If you see the error above, the kubelet's cgroup driver does not match Docker's. To fix it, edit `vi /usr/lib/systemd/system/kubelet.service` and add the cgroup flags to ExecStart:

```
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```

Then repeat step 4.5.
6. Check That the Master Sees the Node

```shell
[root@localhost ~]# kubectl get node
NAME                    STATUS   ROLES    AGE     VERSION
localhost.localdomain   Ready    <none>   2m25s   v1.12.1
```

That completes the setup. Set up the remaining nodes the same way, or clone the node VM and update the local IP in /etc/kubernetes/kubelet.
`localhost.localdomain` is this machine's hostname; change it with `hostnamectl set-hostname <yourName>`.
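When cloning a node VM, the old node's IP is baked into /etc/kubernetes/kubelet and must be replaced. A minimal sketch of the idea with sed, demonstrated on a temporary stand-in file so it is safe to run anywhere; on a real clone you would run the sed line against /etc/kubernetes/kubelet itself, with the IPs adjusted to your own addresses:

```shell
OLD_IP="192.168.100.101"   # IP of the node the VM was cloned from (example address from this guide)
NEW_IP="192.168.100.102"   # this clone's own IP (example address from this guide)
# Stand-in for /etc/kubernetes/kubelet, so the demo touches no real config
CONF=$(mktemp)
echo "KUBELET_HOSTNAME=\"--hostname-override=${OLD_IP}\"" > "$CONF"
# Replace every occurrence of the old IP in place
sed -i "s/${OLD_IP}/${NEW_IP}/g" "$CONF"
result=$(grep -c "$NEW_IP" "$CONF")
echo "lines updated: $result"
rm -f "$CONF"
```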
Source: oschina
Link: https://my.oschina.net/u/4369158/blog/4900663