1. Architecture
1.1 Kubernetes architecture overview
1.2 Flannel network architecture diagram
1.3 Kubernetes workflow
2. Component overview
2.1 Master node
2.1.1 API Server (gateway service): exposes the Kubernetes API, handles REST operations, and persists objects in etcd. It is the single entry point for creating, reading, updating, and deleting all resources.
Only the API Server operates on etcd directly.
Other components query or modify data through the API Server.
It serves as the hub for data exchange and communication between all other components.
2.1.2 Scheduler: handles resource scheduling, assigning Pods to Nodes in the cluster.
Watches kube-apiserver for Pods that have not yet been assigned a Node.
Assigns a Node to each of those Pods according to the scheduling policy.
2.1.3 Controller Manager: performs all remaining cluster-level functions and acts as the automation control center for resource objects. It monitors the state of the whole cluster through the API Server and keeps the cluster in its desired working state.
2.1.4 etcd: all persistent cluster state is stored in etcd.
2.2 Node
2.2.1 Kubelet: manages Pods and their containers, images, and volumes, implementing per-node management for the cluster.
2.2.2 Kube-proxy: provides network proxying and load balancing, enabling communication with Services.
2.2.3 Docker: the container runtime that manages containers on the node.
3. Environment
3.1 Deployment nodes
Hostname | IP | Role | Deployed software |
---|---|---|---|
linux-node1 | 172.16.1.31 | master | apiserver, scheduler, controller-manager, etcd, flanneld |
linux-node2 | 172.16.1.32 | node | kubelet, kube-proxy, etcd, flanneld |
linux-node3 | 172.16.1.33 | node | kubelet, kube-proxy, etcd, flanneld |
3.2 Package versions
Package | Download URL |
---|---|
kubernetes-node-linux-amd64.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes-node-linux-amd64.tar.gz |
kubernetes-server-linux-amd64.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz |
kubernetes-client-linux-amd64.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz |
kubernetes.tar.gz | https://dl.k8s.io/v1.10.1/kubernetes.tar.gz |
flannel-v0.11.0-linux-amd64.tar.gz | https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz |
cni-plugins-amd64-v0.7.1.tgz | https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz |
etcd-v3.2.18-linux-amd64.tar.gz | https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz |
4. Kubernetes installation
4.1 Environment initialization
4.1.1 Disable the firewall, SELinux, and swap
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
swapoff -a && sysctl -w vm.swappiness=0
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
4.1.2 Download a domestic (Aliyun) Docker repo and install Docker
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all && yum repolist -y
yum install -y docker-ce
systemctl start docker
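Optionally (not part of the original steps), enable Docker at boot and confirm the installation before continuing:
# systemctl enable docker
# docker version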
4.1.3 Prepare the deployment directories
mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
# scp -r /opt/kubernetes 172.16.1.32:/opt/
# scp -r /opt/kubernetes 172.16.1.33:/opt/
4.1.4 Add the binary directory to PATH
vim ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/opt/kubernetes/bin/
export PATH
source ~/.bash_profile
# scp ~/.bash_profile 172.16.1.32:~/
# scp ~/.bash_profile 172.16.1.33:~/
4.1.5 Configure kernel parameters (requires a server reboot)
cat /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.ip_forward = 1
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
kernel.sysrq = 1
# make bridged traffic visible to iptables
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
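The bridge-nf-call-* keys above only take effect once the br_netfilter kernel module is loaded. As an alternative to rebooting, a minimal sketch (assuming CentOS 7) that loads the module now and on boot, then applies the settings:
# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# sysctl -p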
4.2 Install the CA certificate tooling (the Kubernetes components use TLS certificates to encrypt their communication)
4.2.1 Install CFSSL
[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@linux-node1 src]# chmod +x cfssl*
[root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
[root@linux-node1 src]# mv cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson
[root@linux-node1 src]# mv cfssl_linux-amd64 /opt/kubernetes/bin/cfssl
#Copy the cfssl binaries to the node hosts linux-node2 and linux-node3. If you have more nodes, copy them to every node.
# scp /opt/kubernetes/bin/cfssl* 172.16.1.32:/opt/kubernetes/bin/
# scp /opt/kubernetes/bin/cfssl* 172.16.1.33:/opt/kubernetes/bin/
4.2.2 Generate template files
[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# mkdir ssl && cd ssl
[root@linux-node1 ssl]# cfssl print-defaults config > config.json #default certificate signing policy template
[root@linux-node1 ssl]# cfssl print-defaults csr > csr.json #default CSR request template
4.2.3 Create the JSON config used to generate the CA
[root@linux-node1 ~]# vim /usr/local/src/ssl/ca-config.json
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
4.2.4 Create the JSON config for the CA certificate signing request (CSR)
[root@linux-node1 ~]# vim /usr/local/src/ssl/ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
4.2.5 Generate the CA certificate (ca.pem) and private key (ca-key.pem)
[root@linux-node1 ~]# cd /usr/local/src/ssl
[root@linux-node1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca #initialize the CA; generates ca-key.pem (private key) and ca.pem (certificate)
[root@linux-node1 ssl]# ls -l ca*
-rw-r--r-- 1 root root 290 Mar 4 13:45 ca-config.json
-rw-r--r-- 1 root root 1001 Mar 4 14:09 ca.csr
-rw-r--r-- 1 root root 208 Mar 4 13:51 ca-csr.json
-rw------- 1 root root 1679 Mar 4 14:09 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 4 14:09 ca.pem
4.2.6 Distribute the certificates
[root@linux-node1 ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl
scp the certificates to the node hosts linux-node2 and linux-node3
# scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.32:/opt/kubernetes/ssl
# scp ca.csr ca.pem ca-key.pem ca-config.json 172.16.1.33:/opt/kubernetes/ssl
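Optionally, the generated CA can be inspected with cfssl-certinfo to confirm its subject and validity before issuing further certificates:
# /opt/kubernetes/bin/cfssl-certinfo -cert /opt/kubernetes/ssl/ca.pem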
4.3 Deploy the etcd cluster
4.3.1 Prepare the etcd package
[root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
[root@linux-node1 src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz
[root@linux-node1 src]# cd etcd-v3.2.18-linux-amd64
[root@linux-node1 etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/
# scp etcd etcdctl 172.16.1.32:/opt/kubernetes/bin/
# scp etcd etcdctl 172.16.1.33:/opt/kubernetes/bin/
4.3.2 Create the etcd certificate signing request
[root@linux-node1 src]# cd /usr/local/src
[root@linux-node1 src]# vim /usr/local/src/etcd-csr.json
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"172.16.1.31",
"172.16.1.32",
"172.16.1.33"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
4.3.3 Generate the etcd certificate and private key
[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
# The following certificate files are generated
[root@linux-node1 src]# ls -l etcd*
-rw-r--r-- 1 root root 1045 Mar 5 11:27 etcd.csr
-rw-r--r-- 1 root root 257 Mar 5 11:25 etcd-csr.json
-rw------- 1 root root 1679 Mar 5 11:27 etcd-key.pem
-rw-r--r-- 1 root root 1419 Mar 5 11:27 etcd.pem
4.3.4 Copy the certificates into /opt/kubernetes/ssl
[root@linux-node1 src]# cp etcd*.pem /opt/kubernetes/ssl
# scp etcd*.pem 172.16.1.32:/opt/kubernetes/ssl
# scp etcd*.pem 172.16.1.33:/opt/kubernetes/ssl
[root@linux-node1 src]# rm -f etcd.csr etcd-csr.json
4.3.5 Create the etcd configuration file (it must be created by hand)
#On the other nodes, change the host-specific values (ETCD_NAME and the IP addresses) to match that node
[root@linux-node1 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.16.1.31:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.31:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.31:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node1=https://172.16.1.31:2380,etcd-node2=https://172.16.1.32:2380,etcd-node3=https://172.16.1.33:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.31:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
4.3.6 Create the etcd systemd unit
[root@linux-node1 ~]# vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
Type=notify
[Install]
WantedBy=multi-user.target
4.3.7 Reload systemd and start etcd
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable etcd
# scp /opt/kubernetes/cfg/etcd.conf 172.16.1.32:/opt/kubernetes/cfg/
# scp /opt/kubernetes/cfg/etcd.conf 172.16.1.33:/opt/kubernetes/cfg/
# scp /etc/systemd/system/etcd.service 172.16.1.32:/etc/systemd/system/
# scp /etc/systemd/system/etcd.service 172.16.1.33:/etc/systemd/system/
#Create the etcd data directory and start etcd on all nodes
[root@linux-node1 ~]# mkdir /var/lib/etcd
[root@linux-node1 ~]# systemctl start etcd
[root@linux-node1 ~]# systemctl status etcd
4.3.8 Verify the cluster
[root@linux-node1 ~]# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
member 435fb0a8da627a4c is healthy: got healthy result from https://172.16.1.32:2379
member 6566e06d7343e1bb is healthy: got healthy result from https://172.16.1.31:2379
member ce7b884e428b6c8c is healthy: got healthy result from https://172.16.1.33:2379
cluster is healthy
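As an extra smoke test (optional), write a key through one member and read it back through another, using the same TLS flags; the v2 etcdctl set/get/rm subcommands match the cluster-health invocation above:
# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem set /test "hello"
# etcdctl --endpoints=https://172.16.1.32:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem get /test
# etcdctl --endpoints=https://172.16.1.31:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem rm /test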
4.4 Master node deployment
4.4.1.1 [Deploy the Kubernetes API service] Prepare the packages
[root@linux-node1 ~]# #cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-server-linux-amd64.tar.gz #requires a proxy to download
[root@linux-node1 ~]# #cd /usr/local/src && tar xf kubernetes-server-linux-amd64.tar.gz
[root@linux-node1 ~]# cd /usr/local/src/kubernetes
[root@linux-node1 kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/
[root@linux-node1 kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/
[root@linux-node1 kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/
4.4.1.2 [Deploy the Kubernetes API service] Create the JSON config for the CSR
[root@linux-node1 src]# vim /usr/local/src/ssl/kubernetes-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"172.16.1.31",
"10.1.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
4.4.1.3 [Deploy the Kubernetes API service] Generate the kubernetes certificate and private key
[root@linux-node1 ssl]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
[root@linux-node1 src]# cp kubernetes*.pem /opt/kubernetes/ssl/
# scp kubernetes*.pem 172.16.1.32:/opt/kubernetes/ssl/
# scp kubernetes*.pem 172.16.1.33:/opt/kubernetes/ssl/
4.4.1.4 [Deploy the Kubernetes API service] Create the client token file used by kube-apiserver
[root@linux-node1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cebfb6641d0845bd61808e2337955ea0
[root@linux-node1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
cebfb6641d0845bd61808e2337955ea0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
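The token value here is reused verbatim in 4.5.1.4 when the kubelet bootstrap credentials are created, so the two must stay in sync. An equivalent sketch (TOKEN is just a throwaway shell variable introduced for illustration) that writes the file and keeps the value handy:
# TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/ssl/bootstrap-token.csv
# echo ${TOKEN}   # save this value for the --token flag in 4.5.1.4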
4.4.1.5 [Deploy the Kubernetes API service] Create the basic username/password authentication file
[root@linux-node1 ~]# vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2
4.4.1.6 [Deploy the Kubernetes API service] Create the kube-apiserver unit (the config specifies the NodePort range Services can use for external access)
[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--bind-address=172.16.1.31 \
--insecure-bind-address=127.0.0.1 \
--authorization-mode=Node,RBAC \
--runtime-config=rbac.authorization.k8s.io/v1 \
--kubelet-https=true \
--anonymous-auth=false \
--basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
--service-cluster-ip-range=10.1.0.0/16 \
--service-node-port-range=20000-40000 \
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/log/api-audit.log \
--event-ttl=1h \
--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
4.4.1.7 [Deploy the Kubernetes API service] Start the API Server
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable kube-apiserver
[root@linux-node1 ~]# systemctl start kube-apiserver
4.4.1.8 [Deploy the Kubernetes API service] Check the API Server status
[root@linux-node1 ~]# systemctl status kube-apiserver
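A quick optional check that the API is answering: since --insecure-bind-address=127.0.0.1 is set (default insecure port 8080), the health endpoint can be queried locally:
# curl http://127.0.0.1:8080/healthz   # should print: ok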
4.4.2.1 [Deploy the Controller Manager] Configure the Controller Manager
[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.1.0.0/16 \
--cluster-cidr=10.2.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--leader-elect=true \
--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
4.4.2.2 [Deploy the Controller Manager] Start the Controller Manager
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 scripts]# systemctl enable kube-controller-manager
[root@linux-node1 scripts]# systemctl start kube-controller-manager
4.4.2.3 [Deploy the Controller Manager] Check the service status
[root@linux-node1 scripts]# systemctl status kube-controller-manager
4.4.3.1 [Deploy the Kubernetes Scheduler] Configure the Scheduler
[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
4.4.3.2 [Deploy the Kubernetes Scheduler] Start the service
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 scripts]# systemctl enable kube-scheduler
[root@linux-node1 scripts]# systemctl start kube-scheduler
[root@linux-node1 scripts]# systemctl status kube-scheduler
4.4.3.3 [Deploy the kubectl CLI] Prepare the binary package
[root@linux-node1 ~]# #cd /usr/local/src && wget https://dl.k8s.io/v1.10.1/kubernetes-client-linux-amd64.tar.gz #requires a proxy to download
[root@linux-node1 ~]# #cd /usr/local/src && tar xf kubernetes-client-linux-amd64.tar.gz
[root@linux-node1 ~]# cd /usr/local/src/kubernetes/client/bin
[root@linux-node1 bin]# cp kubectl /opt/kubernetes/bin/
4.4.3.4 [Deploy the kubectl CLI] Create the admin certificate signing request
[root@linux-node1 ~]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
4.4.3.5 [Deploy the kubectl CLI] Generate the admin certificate and private key
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@linux-node1 ssl]# ls -l admin*
-rw-r--r-- 1 root root 1009 Mar 5 12:29 admin.csr
-rw-r--r-- 1 root root 229 Mar 5 12:28 admin-csr.json
-rw------- 1 root root 1675 Mar 5 12:29 admin-key.pem
-rw-r--r-- 1 root root 1399 Mar 5 12:29 admin.pem
[root@linux-node1 ssl]# mv admin*.pem /opt/kubernetes/ssl/
4.4.3.6 [Deploy the kubectl CLI] Set the cluster parameters
[root@linux-node1 src]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443
Cluster "kubernetes" set.
4.4.3.7 [Deploy the kubectl CLI] Set the client credentials
[root@linux-node1 src]# kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.
4.4.3.8 [Deploy the kubectl CLI] Set the context parameters
[root@linux-node1 src]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin
Context "kubernetes" created.
4.4.3.9 [Deploy the kubectl CLI] Set the default context
[root@linux-node1 src]# kubectl config use-context kubernetes
Switched to context "kubernetes".
4.4.3.10 [Deploy the kubectl CLI] Use kubectl (check component status)
[root@linux-node1 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
4.5 Node deployment
4.5.1.1 [Deploy kubelet] Prepare the binaries and copy them from linux-node1 to the node hosts.
[root@linux-node1 bin]# cd /usr/local/src/kubernetes/server/bin/ && cp kubelet kube-proxy /opt/kubernetes/bin/
# scp kubelet kube-proxy 172.16.1.32:/opt/kubernetes/bin/
# scp kubelet kube-proxy 172.16.1.33:/opt/kubernetes/bin/
4.5.1.2 [Deploy kubelet] Create the role binding
[root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
4.5.1.3 [Deploy kubelet] Create the kubelet bootstrapping kubeconfig: set the cluster parameters
[root@linux-node1 ~]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
4.5.1.4 [Deploy kubelet] Set the client credentials
[root@linux-node1 ~]# kubectl config set-credentials kubelet-bootstrap --token=cebfb6641d0845bd61808e2337955ea0 --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
4.5.1.5 [Deploy kubelet] Set the context parameters
[root@linux-node1 ~]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
Context "default" created.
4.5.1.6 [Deploy kubelet] Select the default context and distribute the kubeconfig
[root@linux-node1 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
[root@linux-node1 ~]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
# scp bootstrap.kubeconfig 172.16.1.32:/opt/kubernetes/cfg
# scp bootstrap.kubeconfig 172.16.1.33:/opt/kubernetes/cfg
4.5.1.7 [Deploy kubelet] Set up CNI support
[root@linux-node1 ~]# mkdir -p /etc/cni/net.d
[root@linux-node1 ~]# vim /etc/cni/net.d/10-default.conf
{
"name": "flannel",
"type": "flannel",
"delegate": {
"bridge": "docker0",
"isDefaultGateway": true,
"mtu": 1400
}
}
# scp -r /etc/cni/net.d 172.16.1.32:/etc/cni/
# scp -r /etc/cni/net.d 172.16.1.33:/etc/cni/
4.5.1.8 [Deploy kubelet] Create the kubelet working directory
[root@linux-node1 ~]# mkdir /var/lib/kubelet
# scp -r /var/lib/kubelet 172.16.1.32:/var/lib/
# scp -r /var/lib/kubelet 172.16.1.33:/var/lib/
4.5.1.9 [Deploy kubelet] Create the kubelet service unit
# On each node, change the host-specific values (--address and --hostname-override) to that node's own IP
[root@linux-node1 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
--address=172.16.1.31 \
--hostname-override=172.16.1.31 \
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/kubernetes/bin/cni \
--cluster-dns=10.1.0.2 \
--cluster-domain=cluster.local. \
--hairpin-mode hairpin-veth \
--allow-privileged=true \
--fail-swap-on=false \
--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
# scp /usr/lib/systemd/system/kubelet.service 172.16.1.32:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/kubelet.service 172.16.1.33:/usr/lib/systemd/system/
4.5.1.10 [Deploy kubelet] Start kubelet (on the node hosts)
[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable kubelet
[root@linux-node2 ~]# systemctl start kubelet
[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl enable kubelet
[root@linux-node3 ~]# systemctl start kubelet
4.5.1.11 [Deploy kubelet] Check the service status
[root@linux-node2 kubernetes]# systemctl status kubelet
4.5.1.12 [Deploy kubelet] Check the CSR requests (run this on linux-node1)
[root@linux-node1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-0_w5F1FM_la_SeGiu3Y5xELRpYUjjT2icIFk9gO9KOU 1m kubelet-bootstrap Pending
4.5.1.13 [Deploy kubelet] Approve the kubelet TLS certificate requests
[root@linux-node1 ~]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io "node-csr-QCgiejwSx_bPgcBLNxHkMHs-lzNAY-bJNgm4skUMqII" approved
After approval, the nodes report Ready:
[root@linux-node1 ssl]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
172.16.1.32 Ready <none> 10m v1.10.1 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://19.3.5
172.16.1.33 Ready <none> 10m v1.10.1 <none> CentOS Linux 7 (Core) 3.10.0-693.el7.x86_64 docker://19.3.5
4.5.2.1 [Deploy Kubernetes Proxy] Install the LVS/IPVS prerequisites for kube-proxy
[root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack
4.5.2.2 [Deploy Kubernetes Proxy] Create the kube-proxy certificate request
[root@linux-node1 ~]# cd /usr/local/src/ssl/
[root@linux-node1 ssl]# vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
4.5.2.3 [Deploy Kubernetes Proxy] Generate the certificate
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
4.5.2.4 [Deploy Kubernetes Proxy] Distribute the certificates to all node hosts
[root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
# scp kube-proxy*.pem 172.16.1.32:/opt/kubernetes/ssl/
# scp kube-proxy*.pem 172.16.1.33:/opt/kubernetes/ssl/
4.5.2.5 [Deploy Kubernetes Proxy] Create the kube-proxy kubeconfig
[root@linux-node1 ssl]# kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://172.16.1.31:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@linux-node1 ssl]# kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@linux-node1 ssl]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
4.5.2.6 [Deploy Kubernetes Proxy] Distribute the kubeconfig
[root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
# scp kube-proxy.kubeconfig 172.16.1.32:/opt/kubernetes/cfg/
# scp kube-proxy.kubeconfig 172.16.1.33:/opt/kubernetes/cfg/
4.5.2.7 [Deploy Kubernetes Proxy] Create the kube-proxy service unit
[root@linux-node1 ~]# mkdir /var/lib/kube-proxy
# scp -r /var/lib/kube-proxy 172.16.1.32:/var/lib/
# scp -r /var/lib/kube-proxy 172.16.1.33:/var/lib/
#On each node, change the host-specific values (--bind-address and --hostname-override) to that node's own IP
[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
--bind-address=172.16.1.31 \
--hostname-override=172.16.1.31 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
--masquerade-all \
--feature-gates=SupportIPVSProxyMode=true \
--proxy-mode=ipvs \
--ipvs-min-sync-period=5s \
--ipvs-sync-period=5s \
--ipvs-scheduler=rr \
--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.32:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/kube-proxy.service 172.16.1.33:/usr/lib/systemd/system/
4.5.2.8 [Deploy Kubernetes Proxy] Start kube-proxy (on the node hosts)
[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable kube-proxy
[root@linux-node2 ~]# systemctl start kube-proxy
[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl enable kube-proxy
[root@linux-node3 ~]# systemctl start kube-proxy
4.5.2.9 [Deploy Kubernetes Proxy] Check the kube-proxy service status
[root@linux-node2 scripts]# systemctl status kube-proxy
Check the LVS/IPVS state
[root@linux-node2 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.0.1:443 rr persistent 10800
-> 172.16.1.31:6443 Masq 1 0 0
If kubelet and kube-proxy are running on both node hosts, you can check their status with:
[root@linux-node1 ssl]# kubectl get node
NAME STATUS ROLES AGE VERSION
172.16.1.32 Ready <none> 22m v1.10.1
172.16.1.33 Ready <none> 3m v1.10.1
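To see the IPVS proxy at work (optional; the object names below are arbitrary test names), create a throwaway Deployment and Service, then look for the new virtual server on a node host:
# kubectl run net-test --image=nginx:alpine --replicas=2
# kubectl expose deployment net-test --port=80
# kubectl get svc net-test                      # note the ClusterIP, e.g. 10.1.x.x
# ipvsadm -L -n | grep -A 2 <ClusterIP>         # run on linux-node2/3; an rr virtual server for the Service should appear
# kubectl delete svc,deployment net-test        # clean up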
4.6 Flannel network deployment
4.6.1 Create the Flannel certificate signing request
[root@linux-node1 ~]# cd /usr/local/src/ssl
[root@linux-node1 ssl]# vim flanneld-csr.json
{
"CN": "flanneld",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
4.6.2 Generate the certificate
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@linux-node1 ssl]# ls flanneld*.pem
flanneld-key.pem flanneld.pem
[root@linux-node1 ssl]# ls -l flanneld*.pem
-rw------- 1 root root 1675 Dec 27 18:55 flanneld-key.pem
-rw-r--r-- 1 root root 1391 Dec 27 18:55 flanneld.pem
4.6.3 Distribute the certificates
[root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
# scp flanneld*.pem 172.16.1.32:/opt/kubernetes/ssl/
# scp flanneld*.pem 172.16.1.33:/opt/kubernetes/ssl/
4.6.4 Download the Flannel package
[root@linux-node1 ~]# cd /usr/local/src && wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@linux-node1 src]# tar zxf flannel-v0.11.0-linux-amd64.tar.gz
[root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
#Copy to the other node hosts
# scp flanneld mk-docker-opts.sh 172.16.1.32:/opt/kubernetes/bin/
# scp flanneld mk-docker-opts.sh 172.16.1.33:/opt/kubernetes/bin/
#Copy the helper scripts into /opt/kubernetes/bin.
[root@linux-node1 ~]# wget https://dl.k8s.io/v1.10.1/kubernetes.tar.gz #requires a proxy to download this package
[root@linux-node1 ~]# tar xf kubernetes.tar.gz -C /usr/local/src/ && cd /usr/local/src/kubernetes/cluster/centos/node/bin/
[root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
# scp remove-docker0.sh 172.16.1.32:/opt/kubernetes/bin/
# scp remove-docker0.sh 172.16.1.33:/opt/kubernetes/bin/
4.6.5 Configure Flannel
[root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
#Copy the config to the other nodes
# scp /opt/kubernetes/cfg/flannel 172.16.1.32:/opt/kubernetes/cfg/
# scp /opt/kubernetes/cfg/flannel 172.16.1.33:/opt/kubernetes/cfg/
4.6.6 Create the Flannel systemd unit
[root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
Type=notify
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Copy the service unit to the other nodes
# scp /usr/lib/systemd/system/flannel.service 172.16.1.32:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/flannel.service 172.16.1.33:/usr/lib/systemd/system/
4.6.7 [Flannel CNI integration] Download the CNI plugins
[root@linux-node1 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
[root@linux-node1 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node1 ~]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
# scp -r /opt/kubernetes/bin/cni 172.16.1.32:/opt/kubernetes/bin/
# scp -r /opt/kubernetes/bin/cni 172.16.1.33:/opt/kubernetes/bin/
4.6.8 [Flannel CNI integration] Create the network config key in etcd
[root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem --no-sync -C https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
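Because the mk command above discards its output, it is worth reading the key back (optional) to confirm the network configuration was written:
# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem --no-sync -C https://172.16.1.31:2379,https://172.16.1.32:2379,https://172.16.1.33:2379 get /kubernetes/network/config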
4.6.9 [Flannel CNI integration] Start Flannel (on all nodes)
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable flannel
[root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node1 ~]# systemctl start flannel
4.6.10 [Flannel CNI integration] Check the service status
[root@linux-node1 ~]# systemctl status flannel
4.6.11 [Flannel CNI integration] Configure Docker to use Flannel
[root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
[Unit] #under [Unit], modify After and add Requires
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service #Docker startup depends on the Flannel network
[Service] #add EnvironmentFile=-/run/flannel/docker
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
#Copy the config to the other two nodes
# scp /usr/lib/systemd/system/docker.service 172.16.1.32:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/docker.service 172.16.1.33:/usr/lib/systemd/system/
4.6.12 [Flannel CNI integration] Restart Docker (on all nodes)
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl restart docker
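After the restart, each host should have a flannel.1 interface and a docker0 bridge inside its per-node subnet of 10.2.0.0/16. An optional sanity check (replace the ping target with another node's docker0 address):
# cat /run/flannel/docker                  # the --bip/--mtu options handed to dockerd
# ip addr show flannel.1
# ip addr show docker0
# ping -c 2 <docker0 IP of another node>   # cross-node traffic over the vxlan overlay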
4.7 CoreDNS deployment
4.7.1 Write the CoreDNS YAML manifest
[root@linux-node1 ~]# vim coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local. in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: coredns
template:
metadata:
labels:
k8s-app: coredns
spec:
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: coredns
image: coredns/coredns:1.0.6
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: coredns
clusterIP: 10.1.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
4.7.2 Deploy CoreDNS
[root@linux-node1 ~]# kubectl create -f coredns.yaml
4.7.3 Test whether DNS works
[root@linux-node1 ~]# kubectl run dns-test --rm -it --image=alpine /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com -c 2
PING www.baidu.com (61.135.169.125): 56 data bytes
64 bytes from 61.135.169.125: seq=0 ttl=127 time=5.718 ms
64 bytes from 61.135.169.125: seq=1 ttl=127 time=5.695 ms
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.695/5.706/5.718 ms
/ #
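The ping above only exercises external resolution. To verify cluster-internal DNS as well (optional), resolve the kubernetes Service from inside the same test pod; it should return the Service IP 10.1.0.1:
/ # nslookup kubernetes.default.svc.cluster.local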
4.8 Dashboard deployment
4.8.1 Create a directory for the Dashboard YAML files (the path is arbitrary)
[root@linux-node1 ~]# mkdir -p /root/dashboard_yaml_dir
4.8.2 Write admin-user-sa-rbac.yaml
[root@linux-node1 ~]# vim /root/dashboard_yaml_dir/admin-user-sa-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
4.8.3 Write kubernetes-dashboard.yaml
[root@linux-node1 ~]# vim /root/dashboard_yaml_dir/kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
#image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30001
selector:
k8s-app: kubernetes-dashboard
4.8.4 Write ui-admin-rbac.yaml
[root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-admin-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ui-admin
rules:
- apiGroups:
- ""
resources:
- services
- services/proxy
verbs:
- '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ui-admin-binding
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ui-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: admin
4.8.5 Write ui-read-rbac.yaml
[root@linux-node1 ~]# vim /root/dashboard_yaml_dir/ui-read-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ui-read
rules:
- apiGroups:
- ""
resources:
- services
- services/proxy
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ui-read-binding
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ui-read
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: readonly
4.8.6 Create the Dashboard
[root@linux-node1 ~]# kubectl create -f /root/dashboard_yaml_dir/
[root@linux-node1 ~]# kubectl cluster-info
Kubernetes master is running at https://172.16.1.31:6443
kubernetes-dashboard is running at https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
4.8.7 Access the Dashboard
https://172.16.1.31:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Username: admin, password: admin (the basic-auth credentials from 4.4.1.5), then choose the Token sign-in method.
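Since the Service in kubernetes-dashboard.yaml is exposed as a NodePort (30001), the UI should also be reachable directly on any node host, e.g. https://172.16.1.32:30001, as an alternative to the API-server proxy URL. To confirm the NodePort:
# kubectl get svc -n kube-system kubernetes-dashboard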
4.8.8 Get the token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')