This article explains how to install Kubernetes on CentOS 7.
1 Single-Node Installation
1.1 Installing via minikube (official minikube)
This subsection explains how to install a local single-node Kubernetes using the minikube tool.
1.1.1 Installing minikube
To install inside a virtual machine, you must first install a hypervisor such as VirtualBox or KVM.
This article installs directly on the host, with no VM dependency; note that with a direct install some minikube commands, such as minikube docker-env, are not supported.
Installing minikube is simple: it is a single executable.
Run the following commands:
[root@k8s-1 ~]# curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
[root@k8s-1 ~]# chmod +x minikube
[root@k8s-1 ~]# sudo cp minikube /usr/local/bin
[root@k8s-1 ~]# rm minikube
Verify the installation:
[root@k8s-1 ~]# minikube version
minikube version: v1.0.1
Common minikube commands:
- minikube version: show the minikube version
- minikube start: start minikube
- minikube ssh: SSH into the VM
- minikube logs: show the minikube logs
- minikube dashboard: launch the minikube dashboard
- minikube ip: show the VM's IP address
- minikube stop: stop the VM
- minikube delete: delete the VM
1.1.2 Installing Kubernetes
After installing minikube, run:
[root@k8s-1 ~]# minikube start --vm-driver=none
This first downloads an ISO file. On my Linux machine the download kept stalling halfway through, so I downloaded the ISO manually and placed it in the /root/.minikube/cache/iso/ directory.
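A minimal sketch of the manual download (the exact ISO file name and version are assumptions; use the URL minikube prints when it stalls):
[root@k8s-1 ~]# curl -Lo minikube-v1.0.1.iso https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso
[root@k8s-1 ~]# mkdir -p /root/.minikube/cache/iso/
[root@k8s-1 ~]# cp minikube-v1.0.1.iso /root/.minikube/cache/iso/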
The run sometimes fails at the "Downloading kubeadm" and "Downloading kubelet" steps with output like this:
[root@k8s-1 ~]# minikube start --vm-driver=none
o minikube v1.0.1 on linux (amd64)
i Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
: Restarting existing none VM for "minikube" ...
: Waiting for SSH access ...
- "minikube" IP address is 192.168.110.145
- Configuring Docker as the container runtime ...
- Version of container runtime is 18.09.6
- Preparing Kubernetes environment ...
X Unable to load cached images: loading cached images: loading image /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory
@ Downloading kubeadm v1.14.1
@ Downloading kubelet v1.14.1
Failed to update cluster
X Error: [DOWNLOAD_RESET_BY_PEER] downloading binaries: downloading kubelet: Error downloading kubelet v1.14.1:
The cause is that the Google site is unreachable from here, so the download fails. Following advice found online, it is enough to download the binaries beforehand, as below. I am not sure why this works; presumably minikube looks in the working directory first and skips the download if the file is already there. I suspect manually downloading them and copying them into the /root/.minikube/cache/v1.14.1/ directory would also work.
[root@k8s-1 ~]# curl -Lo kubeadm http://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm
[root@k8s-1 ~]# curl -Lo kubelet http://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubelet
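Following the speculation above, the downloaded binaries could presumably also go straight into minikube's cache rather than the working directory (untested; the path is assumed from the version being installed):
[root@k8s-1 ~]# mkdir -p /root/.minikube/cache/v1.14.1/
[root@k8s-1 ~]# cp kubeadm kubelet /root/.minikube/cache/v1.14.1/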
[root@k8s-1 ~]# minikube start --vm-driver=none
Without a proxy server you still cannot reach the external network, so Docker cannot pull images from Google's registry, and output like the following appears:
X Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: running command: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: exit status 1
: Relaunching Kubernetes v1.14.1 using kubeadm ...
: Waiting for pods: apiserver
! Error restarting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new
One option is to configure a proxy server, passing its address via --docker-env (I have not tried this), with a command like:
[root@k8s-1 ~]# minikube start --vm-driver=none --docker-env HTTP_PROXY=http://192.168.1.102:1080 --docker-env HTTPS_PROXY=https://192.168.1.102:1080
Another approach is to retag mirrored images.
List all required images with the following command:
[root@k8s-1 ~]# kubeadm config images list
I0515 18:43:31.317493 7874 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: unexpected EOF
I0515 18:43:31.317592 7874 version.go:97] falling back to the local client version: v1.14.1
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
As you can see, all the images live under k8s.gcr.io, which is unreachable, so they must be downloaded from Aliyun or from the official mirrorgooglecontainers/ repositories instead; mirrorgooglecontainers is Docker Hub's official mirror of the Google images.
Finally, retag each downloaded image with a command like:
[root@k8s-1 ~]# docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
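To retag everything in one pass, here is a minimal sketch driven by the image list printed above (note: coredns is not published under mirrorgooglecontainers, so it is pulled from its own Docker Hub repository):
for img in kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 \
           kube-scheduler:v1.14.1 kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10; do
    docker pull docker.io/mirrorgooglecontainers/"$img"                    # pull from the Docker Hub mirror
    docker tag docker.io/mirrorgooglecontainers/"$img" k8s.gcr.io/"$img"   # retag to the name kubeadm expects
done
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1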
Run the start command again:
[root@k8s-1 ~]# minikube start --vm-driver=none
If the same kind of error still appears, run delete first, then start again:
[root@k8s-1 ~]# minikube delete
[root@k8s-1 ~]# minikube start --vm-driver=none
On success, output like the following appears:
Verifying component health .....
> Configuring local host environment ...
! The 'none' driver provides limited isolation and may reduce system security and reliability.
! For more information, see:
- https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may
! need to relocate them. For example, to overwrite your own settings:
- sudo mv /root/.kube /root/.minikube $HOME
- sudo chown -R $USER $HOME/.kube $HOME/.minikube
i This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
+ kubectl is now configured to use "minikube"
= Done! Thank you for using minikube!
If kubectl is installed, you can also inspect the cluster with the following commands:
[root@k8s-1 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority: /root/.minikube/ca.crt
server: https://192.168.110.145:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /root/.minikube/client.crt
client-key: /root/.minikube/client.key
[root@k8s-1 ~]# kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
[root@k8s-1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.110.145:8443
KubeDNS is running at https://192.168.110.145:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 14m v1.14.1
If problems occur, inspect them with:
minikube logs
When I checked the logs, it reported a failure pulling the image k8s.gcr.io/kube-addon-manager:v9.0.
1.1.3 Running a Docker Image on Kubernetes
This section shows how to run a Docker image, using a Node.js program; it is the example from the official tutorial.
(1) Write the program
Save the following code as a file named server.js in a folder named hellonode:
server.js
var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);
www.listen(8080);
(2) Create the Docker container image
In the hellonode folder, create a file named Dockerfile with the following content:
Dockerfile
FROM node:6.9.2
EXPOSE 8080
COPY server.js .
CMD node server.js
Build the Docker image:
docker build -t hello-node:v1 .
You can view the built image with docker images.
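Optionally, smoke-test the image locally before handing it to Kubernetes (a throwaway container name is assumed, and host port 8080 must be free):
docker run -d --name hello-node-test -p 8080:8080 hello-node:v1
curl http://localhost:8080    # should print: Hello World!
docker rm -f hello-node-test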
(3) Create a Deployment
A Kubernetes Pod is a group of one or more containers bundled together that share resources; in this tutorial the Pod has only one container. A Kubernetes Deployment checks the Pod's health and restarts the Pod's container if it terminates; Deployments manage the creation and scaling of Pods.
Use the kubectl run command to create a Deployment that manages the Pod. The Pod runs a container based on the hello-node:v1 Docker image:
kubectl run hello-node --image=hello-node:v1 --port=8080
View the Deployment:
kubectl get deployments
View the Pod:
kubectl get pods
(4) Create a Service
By default, the Pod is only reachable via its internal IP within the Kubernetes cluster. To make the hello-node container accessible from outside the Kubernetes virtual network, expose the Pod as a Kubernetes Service.
We can expose the Pod to the outside with the kubectl expose command:
kubectl expose deployment hello-node --type=LoadBalancer
View the Service just created:
[root@k8s-1 ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.102.14.136 <pending> 8080:32075/TCP 11s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28h
You can see that port 8080 is mapped to port 32075 on the host; open that port in a browser to see the page.
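Because there is no cloud load balancer behind this cluster, EXTERNAL-IP stays <pending>, so the service is reached through the node IP plus the NodePort shown above (your port number will differ):
curl http://192.168.110.145:32075    # should print: Hello World!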
(5) Clean up
kubectl delete service hello-node
kubectl delete deployment hello-node
2 Cluster Installation
2.1 Installing kubeadm
2.1.1 Preparation (from the official docs)
(1) Ensure a unique hostname, MAC address, and product_uuid for every node
Check that the hostname, MAC address, and product_uuid are unique with the following commands:
[root@k8s-2 ~]# hostname
[root@k8s-2 ~]# ifconfig -a
[root@k8s-2 ~]# sudo cat /sys/class/dmi/id/product_uuid
(2) Disable the firewall
Kubernetes needs to bind many ports, which would all have to be opened individually, so here we simply disable the firewall and remove it from startup.
[root@k8s-2 ~]# systemctl stop firewalld.service
[root@k8s-2 ~]# systemctl disable firewalld.service
(3) Disable SELinux
Temporarily:
setenforce 0
Permanently:
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled; a reboot is required for this to take effect.
The commands given by the official docs are:
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
(4) Disable swap
Kubernetes requires system swap to be turned off since version 1.8:
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
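You can then confirm that swap is off; all values in the Swap row should read 0:
free -m | grep -i swap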
(5) Set the time zone and synchronize time
Set the time zone:
[root@k8s-1 ~]# timedatectl set-timezone Asia/Shanghai
Synchronize time using chrony.
Install it:
[root@k8s-3 ~]# yum -y install chrony
Configure it:
[root@k8s-3 ~]# vi /etc/chrony.conf
Change the server entries to your own NTP servers.
Start chrony and enable it at boot:
[root@k8s-1 ~]# systemctl start chronyd
[root@k8s-1 ~]# systemctl enable chronyd
(6) sysctl configuration
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
(7) Ensure the br_netfilter module is loaded
Check whether it is loaded with:
lsmod | grep br_netfilter
If it is not loaded, load it with:
modprobe br_netfilter
2.1.2 Installing kubeadm and kubelet
Next, install kubeadm and kubelet on every node. kubectl, the Kubernetes command-line client, only needs to be installed on the node(s) from which you manage applications.
Add the yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Start kubelet and enable it at boot:
systemctl enable --now kubelet
2.1.3 kubeadm Commands
See the official docs for details.
(1) init
The init command initializes the master. It mainly executes the following phases:
preflight Run pre-flight checks
kubelet-start Writes kubelet settings and (re)starts the kubelet
certs Certificate generation
/ca Generates the self-signed Kubernetes CA to provision identities for other Kubernetes components
/apiserver Generates the certificate for serving the Kubernetes API
/apiserver-kubelet-client Generates the Client certificate for the API server to connect to kubelet
/front-proxy-ca Generates the self-signed CA to provision identities for front proxy
/front-proxy-client Generates the client for the front proxy
/etcd-ca Generates the self-signed CA to provision identities for etcd
/etcd-server Generates the certificate for serving etcd
/apiserver-etcd-client Generates the client apiserver uses to access etcd
/etcd-peer Generates the credentials for etcd nodes to communicate with each other
/etcd-healthcheck-client Generates the client certificate for liveness probes to healtcheck etcd
/sa Generates a private key for signing service account tokens along with its public key
kubeconfig Generates all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
/admin Generates a kubeconfig file for the admin to use and for kubeadm itself
/kubelet Generates a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
/controller-manager Generates a kubeconfig file for the controller manager to use
/scheduler Generates a kubeconfig file for the scheduler to use
control-plane Generates all static Pod manifest files necessary to establish the control plane
/apiserver Generates the kube-apiserver static Pod manifest
/controller-manager Generates the kube-controller-manager static Pod manifest
/scheduler Generates the kube-scheduler static Pod manifest
etcd Generates static Pod manifest file for local etcd.
/local Generates the static Pod manifest file for a local, single-node local etcd instance.
upload-config Uploads the kubeadm and kubelet configuration to a ConfigMap
/kubeadm Uploads the kubeadm ClusterConfiguration to a ConfigMap
/kubelet Uploads the kubelet component config to a ConfigMap
upload-certs Upload certificates to kubeadm-certs
mark-control-plane Mark a node as a control-plane
bootstrap-token Generates bootstrap tokens used to join a node to a cluster
addon Installs required addons for passing Conformance tests
/coredns Installs the CoreDNS addon to a Kubernetes cluster
/kube-proxy Installs the kube-proxy addon to a Kubernetes cluster
(2) config
- kubeadm config upload from-file
- kubeadm config upload from-flags
- kubeadm config view
- kubeadm config print init-defaults
- kubeadm config print join-defaults
- kubeadm config migrate
- kubeadm config images list
- kubeadm config images pull
(3) join
Command format:
kubeadm join [api-server-endpoint] [flags]
It executes the following phases:
preflight Run join pre-flight checks
control-plane-prepare Prepares the machine for serving a control plane.
/download-certs [EXPERIMENTAL] Downloads certificates shared among control-plane nodes from the kubeadm-certs Secret
/certs Generates the certificates for the new control plane components
/kubeconfig Generates the kubeconfig for the new control plane components
/control-plane Generates the manifests for the new control plane components
kubelet-start Writes kubelet settings, certificates and (re)starts the kubelet
control-plane-join Joins a machine as a control plane instance
/etcd Add a new local etcd member
/update-status Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config ConfigMap
/mark-control-plane Mark a node as a control-plane
2.2 Installing a Single-Master Kubernetes Cluster
2.2.1 Verify the Required Images Can Be Downloaded
[root@k8s-2 ~]# kubeadm config images pull
For users in China, Google is unreachable, so the following problem appears:
I0521 14:48:48.008225 30022 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0521 14:48:48.008392 30022 version.go:97] falling back to the local client version: v1.14.2
Workaround 1:
[root@k8s-2 ~]# kubeadm config images list
I0521 14:49:20.089689 30065 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0521 14:49:20.089816 30065 version.go:97] falling back to the local client version: v1.14.2
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
Based on the image list in the output, download each image from Docker Hub (mirrorgooglecontainers/) or Aliyun (registry.aliyuncs.com/google_containers/), then retag it to the k8s.gcr.io name.
This article uses this approach.
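For example, pulling one of the listed images through the Aliyun mirror and retagging it:
[root@k8s-2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.14.2
[root@k8s-2 ~]# docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2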
Workaround 2: see "Using kubeadm init with a configuration file" in the official docs.
Use a kubeadm configuration file that specifies the Docker registry address; this is convenient for fast deployments on an intranet.
[root@k8s-1 ~]# kubeadm config print init-defaults
This prints many settings, e.g. the kubernetesVersion and imageRepository parameters.
The command
[root@k8s-1 ~]# kubeadm config print init-defaults > kubeadm.conf
writes all the default parameters to the kubeadm.conf configuration file; then edit its imageRepository to point at the Docker Hub mirror or Aliyun.
Then initialize using the configuration file:
kubeadm config images list --config kubeadm.conf
kubeadm config images pull --config kubeadm.conf
kubeadm init --config kubeadm.conf
Note: when running init we would normally also pass flags such as --apiserver-advertise-address and --pod-network-cidr, but because we initialize from the kubeadm.conf configuration file here, other flags cannot be given on the command line, so they must be set in kubeadm.conf instead: the advertiseAddress parameter corresponds to --apiserver-advertise-address, and podSubnet corresponds to --pod-network-cidr.
I have not tried this approach.
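For reference, a sketch of the relevant kubeadm.conf fields (the address and CIDR are illustrative values, not from a verified run):
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.110.145    # corresponds to --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16             # corresponds to --pod-network-cidr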
2.2.2 Initialize the Master Node
Run the initialization command:
[root@k8s-1 ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.14.1
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 0.030359 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3n83pz.3uw3bl7w69ddff5d
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.110.145:6443 --token 3n83pz.3uw3bl7w69ddff5d \
--discovery-token-ca-cert-hash sha256:42128c8f226d03a0c72596a242c595f824b50db5de2eb3197bd383d0dddbc06d
Configure kubectl access.
For all users:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
For the root user (this does not survive a reboot; the method above is more reliable):
export KUBECONFIG=/etc/kubernetes/admin.conf
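To make it survive reboots for root, you could instead append the export to the shell profile (a simple workaround, untested here):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile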
2.2.3 Add a Pod Network
A network add-on must be installed so that Pods can communicate with each other.
Your Pod network must not overlap with any host network, because that can cause problems. If you find a conflict between your network plugin's preferred Pod network and some host network, choose a suitable replacement CIDR and pass it via --pod-network-cidr during kubeadm init, or substitute it in the network plugin's YAML.
This article uses the Calico network:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Then verify with the following command that the CoreDNS Pods are Running:
[root@k8s-1 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-5v6hj 2/2 Running 0 96s
kube-system coredns-fb8b8dccf-v4v78 1/1 Running 0 115m
kube-system coredns-fb8b8dccf-x5dg7 1/1 Running 0 115m
kube-system etcd-k8s-1 1/1 Running 0 120m
kube-system kube-apiserver-k8s-1 1/1 Running 0 120m
kube-system kube-controller-manager-k8s-1 1/1 Running 0 120m
kube-system kube-proxy-65fs9 1/1 Running 0 115m
kube-system kube-scheduler-k8s-1 1/1 Running 0 120m
Changing the Pod network address:
The value given to kubeadm init --pod-network-cidr was 192.168.0.0/16, which conflicts with the host IP range (the hosts are on the 192.168.110 subnet), so the pod-network-cidr value has to be changed.
First download calico.yaml, change its network address to 192.168.1.0/24, and apply it:
kubectl apply -f calico.yaml
Then update the pod-network-cidr value to match the one in calico.yaml:
[root@k8s-1 ~]# kubeadm config upload from-flags --pod-network-cidr=192.168.1.0/24
You can view the configuration with:
[root@k8s-1 ~]# kubeadm config view
Note: this change did not seem to work. After making it, the dashboard would only start on the master node; started on a worker node it could not find the apiserver address. The 192.168.1.0/24 CIDR also seemed problematic: after re-running kubeadm init with it, the dashboard container failed to create. In the end I used 10.244.0.0/16.
2.2.4 Master Node Isolation
By default, for security reasons, the cluster does not schedule Pods on the master. If you want to be able to schedule Pods on the master, e.g. for a single-machine Kubernetes development cluster, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
The output looks roughly like this:
node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
2.2.5 Add Worker Nodes
Worker nodes are where containers run. Each node joins the cluster by running the join command printed in the kubeadm init output above ("Then you can join any number of worker nodes by running the following on each as root:").
On each node, run:
kubeadm join 192.168.110.145:6443 --token 3n83pz.3uw3bl7w69ddff5d \
--discovery-token-ca-cert-hash sha256:42128c8f226d03a0c72596a242c595f824b50db5de2eb3197bd383d0dddbc06d
If you do not know the token, run the following on the master:
kubeadm token create
If you do not know the discovery-token-ca-cert-hash, run the following on the master:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
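Alternatively, kubeadm can print a complete ready-to-run join command, token and hash included:
kubeadm token create --print-join-command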
On the master, you can see the added nodes with:
[root@k8s-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 145m v1.14.2
k8s-2 NotReady <none> 9m23s v1.14.2
k8s-3 NotReady <none> 6m42s v1.14.2
k8s-4 NotReady <none> 6m47s v1.14.2
The new nodes' status is NotReady.
Check the logs on a worker node:
[root@k8s-4 ~]# journalctl -f -u kubelet
-- Logs begin at Sun 2019-05-05 15:27:19 CST. --
May 21 17:17:36 k8s-4 kubelet[7247]: E0521 17:17:36.776015 7247 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
You can see it is the image problem again: on every worker node, install the pause:3.1 image, and also the kube-proxy image, via the retag method, as sketched below.
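A minimal sketch for the worker nodes, again going through the Aliyun mirror (image versions taken from the cluster above):
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.14.2
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2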
2.2.6 Ways to View Logs
There are several ways to view logs, and this matters a lot:
tail -f /var/log/messages
/var/log/messages holds all the log messages for the node; check it whenever init or join fails partway through.
journalctl --unit=kubelet -n 100 --no-pager
Prints the last 100 log lines for a service.
journalctl -f -u kubelet
Follows a service's log output (pay attention to the timestamps).
kubectl describe pods coredns-123344 -n kube-system
Prints detailed information about a Pod, here one under the kube-system namespace.
kubectl logs coredns-123344 -n kube-system
Once you have identified the problem Pod, prints its error output.
2.2.7 Uninstalling (translated from the official docs)
To undo what kubeadm did, first drain the node and make sure it is empty before shutting it down.
Log in to the master node and run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm-installed state:
kubeadm reset
The reset process does not reset or clean up iptables rules or IPVS tables. If you want to reset iptables, you must do so manually:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
If you want to reset the IPVS tables, run:
ipvsadm -C
2.2.8 Deploying the Dashboard
The Dashboard is a web UI for Kubernetes. Through it you can deploy containerized applications to the cluster, troubleshoot applications, and manage cluster resources.
(1) Install the dashboard
Following the official steps exactly does not succeed; a few changes are needed.
First of all, the dashboard Pod must run on the master node. I initially ran it on a worker node and it failed with the apiserver connection error below; specifying the --apiserver-host address did not help either.
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://192.168.10.144:6443/version: dial tcp 192.168.110.145:6443: i/o timeout
First download the kubernetes-dashboard.yaml file:
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
Then edit kubernetes-dashboard.yaml as follows.
In the Dashboard Deployment section, change the template block as shown below: add nodeName so the Pod is scheduled on the master node, and change image to replace the Google registry with Aliyun:
template:
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
  spec:
    nodeName: k8s-1
    containers:
    - name: kubernetes-dashboard
      image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
      ports:
      - containerPort: 8443
        protocol: TCP
      args:
In the Dashboard Service section, change the spec block as follows: add type: NodePort and nodePort: 30001 so the dashboard can be reached from an external IP address; otherwise it is only reachable from machines inside the Kubernetes cluster.
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
After editing, run:
[root@k8s-1 ~]# kubectl create -f kubernetes-dashboard.yaml
Output:
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Then check whether it started successfully:
[root@k8s-1 ~]# kubectl -n kube-system get pods,deployments
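Once the Pod is Running, the dashboard should be reachable from outside through the NodePort configured above (self-signed certificate, hence -k; the IP is the master's):
curl -k https://192.168.110.145:30001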
(2) Create a user
Just follow the official steps.
Create a file dashboard-adminuser.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Note that the file above creates only a ServiceAccount and a ClusterRoleBinding, with no Role of its own: in clusters created with the kops or kubeadm tools, the cluster-admin ClusterRole already exists in the cluster and can be bound directly.
Then run:
kubectl apply -f dashboard-adminuser.yaml
Getting the token, method 1:
Get the login token with:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Copy the printed token into the web login page.
Getting the token, method 2:
Run:
[root@k8s-1 ~]# kubectl -n kube-system get secret
Find the matching secret name and describe it to get the token; using the secret name admin-user-token-2wrxj as an example:
[root@k8s-1 ~]# kubectl -n kube-system describe secret admin-user-token-2wrxj
Copy the printed token into the web login page.
2.3 Installing a Highly Available Kubernetes Cluster
To be added.
3 Cluster Upgrade
3.1 Query the versions available for upgrade with:
[root@k8s-1 ~]# kubeadm upgrade plan
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 4 x v1.15.3 v1.15.12
Upgrade to the latest version in the v1.15 series:
COMPONENT CURRENT AVAILABLE
API Server v1.15.3 v1.15.12
Controller Manager v1.15.3 v1.15.12
Scheduler v1.15.3 v1.15.12
Kube Proxy v1.15.3 v1.15.12
CoreDNS 1.3.1 1.3.1
Etcd 3.3.10 3.3.10
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.15.12
Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.12.
As the output above notes, kubeadm itself generally needs to be upgraded before the cluster can be.
3.2 Upgrade kubeadm
On the master node, install kubeadm, kubelet, and kubectl; on the other nodes, install kubeadm and kubelet:
[root@k8s-1 ~]# yum install -y kubeadm-1.15.12-0 kubelet-1.15.12-0 kubectl-1.15.12-0
3.3 After kubeadm is installed, run the following to customize the configuration, mainly to change the image repository:
[root@k8s-1 ~]# kubeadm config print init-defaults > kubeadm-cof.yaml
[root@k8s-1 ~]# vi kubeadm-cof.yaml
Add the Aliyun repository by changing the line to: imageRepository: registry.aliyuncs.com/google_containers
3.4 Run the upgrade command
On the master node:
[root@k8s-1 ~]# kubeadm upgrade apply v1.15.12
The last few lines of output:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.3". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
On the other nodes, run:
kubeadm upgrade node
3.5 On all nodes, run:
[root@k8s-2 ~]# systemctl daemon-reload
[root@k8s-2 ~]# systemctl restart kubelet
3.6 Check the new version information:
[root@k8s-1 ~]# kubectl get nodes
[root@k8s-1 ~]# kubectl version