Manually installing Kubernetes 1.11 with IPVS enabled

Submitted by 亡梦爱人 on 2019-11-25 20:20:45

ERROR:

#Many readers reported authentication failures after following this guide. I re-checked, and the configuration itself is correct.
#The culprit was 51CTO's markdown rendering: pasted code lost its indentation.
#The error seen in the comments ("error: unable to upgrade connection: Unauthorized")
#was caused by copying the mis-indented /etc/kubernetes/kubelet-config.yml verbatim.
#The article below has been fixed; to save you some trouble, the original notes are here: http://note.youdao.com/noteshare?id=31d9d5db79cc3ae27e72c029b09ac4ab&sub=9489CC3D8A8C44F197A8A421DC7209D7

Environment:

OS: CentOS 7.5 1804
Kernel: 3.10.0-862.el7.x86_64
Docker version: 18.06.0-ce
Kubernetes version: v1.11
    master      192.168.1.1
    node1       192.168.1.2
    node2       192.168.1.3
etcd version: v3.2.22
    etcd1       192.168.1.4
    etcd2       192.168.1.5
    etcd3       192.168.1.6

1. Preparation

For convenience, run all commands as root.
The following steps need to be performed only on the Kubernetes cluster nodes.

  • Disable SELinux and the firewall

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
  • Disable swap

swapoff -a
  • Configure forwarding-related kernel parameters; without them, later steps may fail

cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

sysctl --system
  • Load the IPVS kernel modules

cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*)\.ko\.xz#\1#'\`; do
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
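#As a quick sanity check, the ip_vs modules should now appear in lsmod:

lsmod | grep ip_vs
#expect ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and friends in the output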
  • Install cfssl

#Only needs to be installed on the master node!

wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
for cfssl in `ls /bin/cfssl*`; do chmod +x $cfssl; done
  • Install Docker and remove the docker0 bridge

yum install docker-ce

systemctl start docker

cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "live-restore": true,
  "default-shm-size": "128M",
  "bridge": "none",
  "max-concurrent-downloads": 10,
  "oom-score-adjust": -1000,
  "debug": false
}
EOF

systemctl restart docker

#After the restart, run `ip a`; the docker0 interface should be gone

2. Install etcd

  • Prepare the etcd certificates

    Run on the master node

mkdir -pv $HOME/ssl && cd $HOME/ssl

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "192.168.1.4",
      "192.168.1.5",
      "192.168.1.6"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

#Generate the certificates and copy them to the other etcd nodes

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

mkdir -pv /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl

scp -r /etc/etcd 192.168.1.4:/etc/
scp -r /etc/etcd 192.168.1.5:/etc/
scp -r /etc/etcd 192.168.1.6:/etc/
  • Install and start etcd on the etcd1 host

yum install -y etcd

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.4:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.4:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.4:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.4:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.4:2380,etcd2=https://192.168.1.5:2380,etcd3=https://192.168.1.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Install and start etcd on the etcd2 host

yum install -y etcd

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.5:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.5:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.5:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.5:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.4:2380,etcd2=https://192.168.1.5:2380,etcd3=https://192.168.1.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Install and start etcd on the etcd3 host

yum install -y etcd

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.4:2380,etcd2=https://192.168.1.5:2380,etcd3=https://192.168.1.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Check cluster health

#Run on the etcd1 node

etcdctl --endpoints "https://127.0.0.1:2379" \
  --ca-file=/etc/etcd/ssl/etcd-ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health
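#If every member reports healthy, listing the members with the same TLS flags
#should show all three peers as well:

etcdctl --endpoints "https://127.0.0.1:2379" \
  --ca-file=/etc/etcd/ssl/etcd-ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  member list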

3. Prepare the Kubernetes certificates

Run on the master node.

  • Create the working directory

mkdir -p $HOME/ssl && cd $HOME/ssl
  • Configure the root CA

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
     "expiry": "87600h"
  }
}
EOF
  • Generate the root CA

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*.pem
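#Optionally, inspect the new CA with the cfssl-certinfo binary installed earlier
#to confirm the CN and the 10-year validity:

cfssl-certinfo -cert ca.pem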
  • Configure the kube-apiserver certificate

#10.96.0.1 is the first IP of the service-cluster-ip-range passed to kube-apiserver

cat > kube-apiserver-csr.json << EOF
{
    "CN": "kube-apiserver",
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3",
      "10.96.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-apiserver certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
ls kube-apiserver*.pem
  • Configure the kube-controller-manager certificate

cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:kube-controller-manager",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-controller-manager certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*.pem
  • Configure the kube-scheduler certificate

cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:kube-scheduler",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-scheduler certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
ls kube-scheduler*.pem
  • Configure the kube-proxy certificate

cat > kube-proxy-csr.json << EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:kube-proxy",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-proxy certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*.pem
  • Configure the admin certificate

cat > admin-csr.json << EOF
{
    "CN": "admin",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the admin certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*.pem
  • Copy the generated certificates and distribute them to the other nodes

mkdir -pv /etc/kubernetes/pki
cp ca*.pem admin*.pem kube-proxy*.pem kube-scheduler*.pem kube-controller-manager*.pem kube-apiserver*.pem /etc/kubernetes/pki
scp -r /etc/kubernetes 192.168.1.2:/etc/
scp -r /etc/kubernetes 192.168.1.3:/etc/

4. Install the master

  • Download and unpack the server tarball, then configure environment variables

cd /root
wget https://dl.k8s.io/v1.11.1/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz -C /usr/local
mv /usr/local/kubernetes /usr/local/kubernetes-v1.11
ln -s kubernetes-v1.11 /usr/local/kubernetes

cat > /etc/profile.d/kubernetes.sh << EOF
k8s_home=/usr/local/kubernetes
export PATH=\$k8s_home/server/bin:\$PATH
source <(kubectl completion bash)
EOF

source /etc/profile.d/kubernetes.sh
kubectl version
  • Generate the kubeconfig files

    • Generate a TLS bootstrapping token

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > /etc/kubernetes/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
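#The token file is a CSV of token,user,uid,"group". To double-check what was
#written (the token value below is only an illustration; yours will differ):

cat /etc/kubernetes/token.csv
#02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:kubelet-bootstrap"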
    • Create the kubelet bootstrapping kubeconfig

cd /etc/kubernetes

export KUBE_APISERVER="https://192.168.1.1:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet-bootstrap.conf

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap.conf

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap.conf

kubectl config use-context default --kubeconfig=kubelet-bootstrap.conf
    • Create the kube-controller-manager kubeconfig

export KUBE_APISERVER="https://192.168.1.1:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.conf

kubectl config set-credentials kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem \
  --client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.conf

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=kube-controller-manager.conf

kubectl config use-context default --kubeconfig=kube-controller-manager.conf
    • Create the kube-scheduler kubeconfig

export KUBE_APISERVER="https://192.168.1.1:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.conf

kubectl config set-credentials kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem \
  --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.conf

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=kube-scheduler.conf

kubectl config use-context default --kubeconfig=kube-scheduler.conf
    • Create the kube-proxy kubeconfig

export KUBE_APISERVER="https://192.168.1.1:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.conf

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.conf

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.conf

kubectl config use-context default --kubeconfig=kube-proxy.conf
    • Create the admin kubeconfig

export KUBE_APISERVER="https://192.168.1.1:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=admin.conf

kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.conf

kubectl config set-context default \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=admin.conf

kubectl config use-context default --kubeconfig=admin.conf
    • Copy kubelet-bootstrap.conf and kube-proxy.conf to the other nodes

scp kubelet-bootstrap.conf kube-proxy.conf 192.168.1.2:/etc/kubernetes
scp kubelet-bootstrap.conf kube-proxy.conf 192.168.1.3:/etc/kubernetes
cd $HOME
  • Configure and start kube-apiserver

    • Copy the etcd certificates

mkdir -pv /etc/kubernetes/pki/etcd
cd /etc/etcd/ssl
cp etcd-ca.pem etcd-key.pem etcd.pem /etc/kubernetes/pki/etcd
    • Generate the service account key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
ls /etc/kubernetes/pki/sa.*
cd $HOME
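#Optionally verify the key pair before wiring it into kube-apiserver and
#kube-controller-manager:

openssl rsa -in /etc/kubernetes/pki/sa.key -check -noout
openssl rsa -pubin -in /etc/kubernetes/pki/sa.pub -noout -text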
    • systemd unit file

cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/kubernetes/server/bin/kube-apiserver \\
      \$KUBE_LOGTOSTDERR \\
      \$KUBE_LOG_LEVEL \\
      \$KUBE_ETCD_ARGS \\
      \$KUBE_API_ADDRESS \\
      \$KUBE_SERVICE_ADDRESSES \\
      \$KUBE_ADMISSION_CONTROL \\
      \$KUBE_APISERVER_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
    • Configuration files (/etc/kubernetes/config is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy)

cat > /etc/kubernetes/config << EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
EOF

cat > /etc/kubernetes/apiserver << EOF
KUBE_API_ADDRESS="--advertise-address=192.168.1.1"
KUBE_ETCD_ARGS="--etcd-servers=https://192.168.1.4:2379,https://192.168.1.5:2379,https://192.168.1.6:2379 --etcd-cafile=/etc/kubernetes/pki/etcd/etcd-ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
KUBE_APISERVER_ARGS="--allow-privileged=true --authorization-mode=Node,RBAC --enable-bootstrap-token-auth=true --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --service-account-key-file=/etc/kubernetes/pki/sa.pub --enable-swagger-ui=true --secure-port=6443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --kubelet-client-certificate=/etc/kubernetes/pki/admin.pem --kubelet-client-key=/etc/kubernetes/pki/admin-key.pem"
EOF
    • Start

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
    • Smoke test

curl -k https://192.168.1.1:6443/

#A response like the following means the API server is up; the 401 is expected
#because the request is unauthenticated:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
  • Configure and start kube-controller-manager

    • systemd unit file

cat > /etc/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/kubernetes/server/bin/kube-controller-manager \\
      \$KUBE_LOGTOSTDERR \\
      \$KUBE_LOG_LEVEL \\
      \$KUBECONFIG \\
      \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
    • Configuration file

cat > /etc/kubernetes/controller-manager << EOF
KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-controller-manager.conf"
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --cluster-cidr=10.0.0.0/8 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --service-account-private-key-file=/etc/kubernetes/pki/sa.key --root-ca-file=/etc/kubernetes/pki/ca.pem --leader-elect=true --use-service-account-credentials=true --node-monitor-grace-period=10s --pod-eviction-timeout=10s --allocate-node-cidrs=true --controllers=*,bootstrapsigner,tokencleaner"
EOF
    • Start

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
  • Configure and start kube-scheduler

    • systemd unit file

cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/kubernetes/server/bin/kube-scheduler \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBECONFIG \\
          \$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
    • Configuration file

cat > /etc/kubernetes/scheduler << EOF
KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-scheduler.conf"
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
EOF
    • Start

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
  • Configure kubectl

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
  • Check component status

kubectl get componentstatuses

[root@master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
  • Allow kubelet to use bootstrap tokens

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
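#The kubelets cannot bootstrap without this binding; confirm it exists:

kubectl describe clusterrolebinding kubelet-bootstrap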

5. Configure CNI and kubelet

  • On the master

    • Download the CNI plugins

cd /root
wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
mkdir -p /opt/cni/bin
tar -xf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
    • Configure kubelet

#systemd unit file

cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/kubernetes/server/bin/kubelet \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBELET_CONFIG \\
          \$KUBELET_HOSTNAME \\
          \$KUBELET_POD_INFRA_CONTAINER \\
          \$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/kubernetes/config << EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
EOF

cat > /etc/kubernetes/kubelet << EOF
KUBELET_HOSTNAME="--hostname-override=192.168.1.1"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
KUBELET_CONFIG="--config=/etc/kubernetes/kubelet-config.yml"
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cert-dir=/etc/kubernetes/pki --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d"
EOF

#Mind the indentation below; mis-indented copies of this file cause the
#"Unauthorized" error described at the top of the article

cat > /etc/kubernetes/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.1
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local.
hairpinMode: promiscuous-bridge
serializeImagePulls: false
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
  anonymous:
    enabled: false
  webhook:
    enabled: false
EOF
    • Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet
  • On node1

    • Download the node tarball and the CNI plugins

cd /root
wget https://dl.k8s.io/v1.11.1/kubernetes-node-linux-amd64.tar.gz
wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz

tar -xf kubernetes-node-linux-amd64.tar.gz -C /usr/local/
mv /usr/local/kubernetes /usr/local/kubernetes-v1.11
ln -s kubernetes-v1.11 /usr/local/kubernetes
mkdir -p /opt/cni/bin
tar -xf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
    • Configure kubelet

#systemd unit file

cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/kubernetes/node/bin/kubelet \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBELET_CONFIG \\
          \$KUBELET_HOSTNAME \\
          \$KUBELET_POD_INFRA_CONTAINER \\
          \$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/kubernetes/config << EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
EOF

cat > /etc/kubernetes/kubelet << EOF
KUBELET_HOSTNAME="--hostname-override=192.168.1.2"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
KUBELET_CONFIG="--config=/etc/kubernetes/kubelet-config.yml"
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cert-dir=/etc/kubernetes/pki --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d"
EOF

cat > /etc/kubernetes/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.2
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local.
hairpinMode: promiscuous-bridge
serializeImagePulls: false
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
  anonymous:
    enabled: false
  webhook:
    enabled: false
EOF
    • Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet
  • On node2

    • Download the node tarball and the CNI plugins

cd /root
wget https://dl.k8s.io/v1.11.1/kubernetes-node-linux-amd64.tar.gz
wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz

tar -xf kubernetes-node-linux-amd64.tar.gz -C /usr/local/
mv /usr/local/kubernetes /usr/local/kubernetes-v1.11
ln -s kubernetes-v1.11 /usr/local/kubernetes
mkdir -p /opt/cni/bin
tar -xf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
    • Configure kubelet

#systemd unit file

cat > /etc/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/kubernetes/node/bin/kubelet \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBELET_CONFIG \\
          \$KUBELET_HOSTNAME \\
          \$KUBELET_POD_INFRA_CONTAINER \\
          \$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/kubernetes/config << EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
EOF

cat > /etc/kubernetes/kubelet << EOF
KUBELET_HOSTNAME="--hostname-override=192.168.1.3"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
KUBELET_CONFIG="--config=/etc/kubernetes/kubelet-config.yml"
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cert-dir=/etc/kubernetes/pki --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d"
EOF

cat > /etc/kubernetes/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.3
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local.
hairpinMode: promiscuous-bridge
serializeImagePulls: false
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
  anonymous:
    enabled: false
  webhook:
    enabled: false
EOF
    • Start kubelet

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
systemctl status kubelet
  • Approve the node certificate requests to join the cluster

#On the master node

kubectl get csr

#Approve all pending node CSRs at once

kubectl get csr | awk '/node/{print $1}' | xargs kubectl certificate approve

#Example of approving a single CSR:
kubectl certificate approve node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE

#Check the nodes
#They show NotReady at this point because no network plugin is configured yet

kubectl get nodes

[root@master ~]# kubectl get nodes
NAME          STATUS     ROLES     AGE       VERSION
192.168.1.1   NotReady   <none>    6s        v1.11.1
192.168.1.2   NotReady   <none>    7s        v1.11.1
192.168.1.3   NotReady   <none>    7s        v1.11.1

#On the node side, bootstrapping has generated these files:

ls -l /etc/kubernetes/kubelet.conf
ls -l /etc/kubernetes/pki/kubelet*

6. Configure kube-proxy

- kube-proxy must be configured on every node!

  • On the master node

    • Install conntrack-tools

yum install -y conntrack-tools
    • systemd unit file and configuration

cat > /etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/kubernetes/server/bin/kube-proxy \\
      \$KUBE_LOGTOSTDERR \\
      \$KUBE_LOG_LEVEL \\
      \$KUBECONFIG \\
      \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Enabling IPVS essentially means setting kube-proxy's --proxy-mode option to ipvs.
#--masquerade-all is also enabled so that iptables assists IPVS with SNAT.

cat > /etc/kubernetes/proxy << EOF
KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-proxy.conf"
KUBE_PROXY_ARGS="--proxy-mode=ipvs --masquerade-all=true --cluster-cidr=10.0.0.0/8"
EOF
    • Start

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy
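#To verify kube-proxy really selected the IPVS proxier rather than silently
#falling back to iptables (which happens when the ip_vs modules are missing),
#check its log; it should mention the ipvs proxier:

journalctl -u kube-proxy | grep -i proxier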
  • On all node machines

    • Install conntrack-tools

yum install -y conntrack-tools
    • systemd unit file and configuration

cat > /etc/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/kubernetes/node/bin/kube-proxy \\
      \$KUBE_LOGTOSTDERR \\
      \$KUBE_LOG_LEVEL \\
      \$KUBECONFIG \\
      \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#Enabling IPVS essentially means setting kube-proxy's --proxy-mode option to ipvs.
#--masquerade-all is also enabled so that iptables assists IPVS with SNAT.

cat > /etc/kubernetes/proxy << EOF
KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-proxy.conf"
KUBE_PROXY_ARGS="--proxy-mode=ipvs --masquerade-all=true --cluster-cidr=10.0.0.0/8"
EOF
    • Start

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy

7. Set cluster roles

Run on the master node.

  • Label 192.168.1.1 as the master

kubectl label nodes 192.168.1.1 node-role.kubernetes.io/master=
  • Label 192.168.1.2 and 192.168.1.3 as nodes

kubectl label nodes 192.168.1.2 node-role.kubernetes.io/node=
kubectl label nodes 192.168.1.3 node-role.kubernetes.io/node=
  • Taint the master so that it normally accepts no workloads (a toleration sketch for scheduling onto it anyway follows below)

kubectl taint nodes 192.168.1.1 node-role.kubernetes.io/master=true:NoSchedule
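If a workload must run on the master despite this taint, its pod spec can tolerate it; a minimal sketch (the key, value, and effect must match the taint above):

tolerations:
- key: node-role.kubernetes.io/master
  operator: Equal
  value: "true"
  effect: NoSchedule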
  • Check the nodes

#The nodes are still NotReady because no network is configured,
#but the ROLES column now shows master and node

kubectl get node

NAME          STATUS     ROLES     AGE       VERSION
192.168.1.1   NotReady   master    1m        v1.11.1
192.168.1.2   NotReady   node      1m        v1.11.1
192.168.1.3   NotReady   node      1m        v1.11.1

8. Configure the network

  • Choose exactly one of the following two networks:

    • Use the flannel network

cd /root/
mkdir flannel
cd flannel
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

#Change the network segment in kube-flannel.yml to the one this cluster uses
sed -ri 's#("Network": ")10.244.0.0/16#\110.0.0.0/8#' kube-flannel.yml

kubectl apply -f .
    • Use the canal network

cd /root/
mkdir canal
cd canal

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

#Change the network segment in canal.yaml to the one this cluster uses
sed -ri 's#("Network": ")10.244.0.0/16#\110.0.0.0/8#' canal.yaml

kubectl apply -f .
  • Check that the network pods are Running

kubectl get -n kube-system pod -o wide

[root@master ~]# kubectl get -n kube-system pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
canal-74zhp   3/3       Running   0          7m        192.168.1.3   192.168.1.3
canal-cmz2p   3/3       Running   0          7m        192.168.1.1   192.168.1.1
canal-mkcg2   3/3       Running   0          7m        192.168.1.2   192.168.1.2
  • Check that every node is Ready

kubectl get node

[root@master ~]# kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
192.168.1.1   Ready     master    5h        v1.11.1
192.168.1.2   Ready     node      5h        v1.11.1
192.168.1.3   Ready     node      5h        v1.11.1

9. Configure CoreDNS

#10.96.0.10 is the DNS address configured in kubelet
#Install CoreDNS

cd /root && mkdir coredns && cd coredns
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
chmod +x deploy.sh
./deploy.sh -i 10.96.0.10 > coredns.yml
kubectl apply -f coredns.yml

#Check

kubectl get svc,pods -n kube-system

[root@master coredns]# kubectl get svc,pods -n kube-system
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   2m

NAME                           READY     STATUS    RESTARTS   AGE
pod/canal-5wkkd                3/3       Running   0          17h
pod/canal-6mhhz                3/3       Running   0          17h
pod/canal-k7ccs                3/3       Running   2          17h
pod/coredns-6975654877-jpqg4   1/1       Running   0          2m
pod/coredns-6975654877-lgz9n   1/1       Running   0          2m

10. Testing

  • Create an nginx application to test that workloads and DNS work (the manifest is applied right after it is written, as shown below)

cd /root && mkdir nginx && cd nginx

cat << EOF > nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
    name: nginx-port
    targetPort: 80
    protocol: TCP

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
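#Apply the manifest so the Service and Deployment exist before the tests below:

kubectl apply -f nginx.yaml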
  • Create a pod to test DNS

kubectl run curl --image=radial/busyboxplus:curl -i --tty

#Inside the pod, run:
nslookup kubernetes
nslookup nginx
curl nginx
exit

[ root@curl-87b54756-qf7l9:/ ]$ nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-87b54756-qf7l9:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.105.93.85 nginx.default.svc.cluster.local
[ root@curl-87b54756-qf7l9:/ ]$ curl nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
...

[ root@curl-87b54756-qf7l9:/ ]$ exit
Session ended, resume using 'kubectl attach curl-87b54756-qf7l9 -c curl -i -t' command when the pod is running
  • From an etcd node, run curl nodeIP:31000 to test that nginx is reachable from outside the cluster

curl 192.168.1.2:31000

[root@node5 ~]# curl 192.168.1.2:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • Install ipvsadm and inspect the IPVS rules

yum install -y ipvsadm

ipvsadm

[root@master ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  master:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  master:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  master:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  master:https rr
  -> master:sun-sr-https          Masq    1      2          0
TCP  master:domain rr
  -> 10.0.0.3:domain              Masq    1      0          0
  -> 10.0.1.3:domain              Masq    1      0          0
TCP  master:http rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  localhost:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
UDP  master:domain rr
  -> 10.0.0.3:domain              Masq    1      0          0
  -> 10.0.1.3:domain              Masq    1      0          0
