1. What are stateful and stateless services?
For a server program, whether a service is stateful or stateless comes down to whether two requests from the same client have a contextual relationship on the server side. For stateful requests, the server generally keeps information about earlier requests, and each new request can implicitly rely on it. For stateless requests, everything the server needs to process a request must come from the request itself, plus shared server-side data that is available to every request.
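To make the contrast concrete, here is a minimal Python sketch (the class and function names are illustrative, not from any real framework): the stateless handler derives everything from the request, while the stateful server consults per-client context it keeps itself.

```python
# Minimal sketch contrasting the two models; names are illustrative only.

def stateless_handler(request: dict) -> str:
    # Everything needed is in the request itself (plus shared, read-only data).
    return f"hello, {request['user']}"

class StatefulServer:
    def __init__(self):
        self.sessions = {}  # per-client context kept on the server

    def handle(self, client_id: str, request: dict) -> str:
        # Later requests implicitly build on state left by earlier ones.
        ctx = self.sessions.setdefault(client_id, {"count": 0})
        ctx["count"] += 1
        return f"hello, {request['user']} (request #{ctx['count']})"
```

Note that the stateless handler can run on any replica interchangeably, while the stateful one only works if the same client keeps reaching the replica that holds its session.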
The most famous stateless server program is the web server. Each HTTP request is independent of all previous ones; it simply fetches the target URI, and once the content is delivered the connection is torn down without a trace. As the web evolved, state was gradually layered on top of this stateless model, most notably with cookies. When the server responds to a client, it can push down a cookie that records some server-side information; the client attaches the cookie to subsequent requests, and the server uses it to reconstruct the request's context. Cookies are a bridge from statelessness to statefulness: an external mechanism for maintaining context.
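The cookie mechanism described above can be sketched as follows (a toy model with hypothetical names; a real server would also sign, scope, and expire its cookies):

```python
import uuid

# Server-side session store, keyed by the cookie value pushed to the client.
SESSIONS: dict = {}

def login(user: str) -> str:
    """Respond to a first request by issuing a cookie that keys the context."""
    cookie = uuid.uuid4().hex
    SESSIONS[cookie] = {"user": user}
    return cookie

def handle(cookie: str) -> str:
    """A later request carries the cookie; the server recovers the context."""
    ctx = SESSIONS.get(cookie)
    return ctx["user"] if ctx else "anonymous"
```

The server itself stays stateless in its request handling; all the context lives in the session store and is re-attached per request via the cookie.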
Stateful servers have a broader range of applications, for example MSN or online game servers. The server maintains state for each connection, and when a request arrives on a connection it can reconstruct the context from locally stored information. This makes it easy for clients to rely on defaults and for the server to manage state. For example, once a user logs in, the server can look up previously registered details such as their birthday by username, and later processing can easily retrieve that user's history.
Stateful servers are more powerful in terms of functionality, but because they must maintain a large amount of information and state, their performance is somewhat worse than that of stateless servers. Stateless servers excel at simple services but fall short for complex features: implementing an instant-messaging server as a stateless service, say, would be a nightmare.
2. In K8s, how does data persistence differ between stateful and stateless services?
In k8s, persisting data for a stateless service such as a web server can be done the way my earlier post, "K8s: automatically creating PVs for data persistence", describes. But applying that approach to a stateful service such as a database has a serious problem: when you write to the database, you will find you can only write to one of the backend containers. The written data does land in the NFS directory, but the other database instances cannot read it, because many database-specific factors get in the way, such as server_id and partition-table metadata.
Databases are not alone here; other stateful services cannot use that persistence approach either.
3. Persisting data for stateful services: StatefulSet
StatefulSet is another resource object (called PetSet before Kubernetes 1.5). Like RS, RC, and Deployment, it is a Pod controller.
In Kubernetes, most Pod management is based on a stateless, disposable model. A Replication Controller, for example, simply guarantees the number of Pods available to serve. If a Pod is deemed unhealthy, Kubernetes treats it like cattle: delete it and rebuild it. A PetSet ("pet" application), by contrast, is a group of stateful Pods, each with its own special, immutable ID and its own unique data that must not be lost.
As is well known, stateful applications are much harder to manage than stateless ones. A stateful application needs fixed IDs, has internal communication logic that is opaque from the outside, and tolerates container churn poorly. Traditionally, stateful applications were managed with fixed machines, static IPs, persistent storage, and so on. Kubernetes uses the PetSet resource to decouple stateful Pets from specific physical infrastructure: a PetSet guarantees that at any moment a fixed number of Pets are running, each with its own unique identity.
A Pet "with an identity" means its Pod has the following properties:
- Stable storage;
- A fixed hostname that is addressable via DNS (a stable network identity, implemented with a special kind of Service called a Headless Service; unlike a normal Service, a Headless Service has no Cluster IP, and instead gives each member of a cluster a unique DNS name so that members can talk to each other);
- An ordered index (if a PetSet is named mysql, the first Pet to start is mysql-0, the second mysql-1, and so on; when a Pet goes down, its replacement is given the same name, and that name maps it back to its original storage, preserving its state).
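Because the names in the list above are deterministic, peers can address each other before the Pods even exist. A small sketch of the DNS names a set called mysql behind a Headless Service (also called mysql) would get, assuming the default cluster domain cluster.local (the helper function itself is hypothetical):

```python
def pod_dns_names(statefulset: str, service: str, namespace: str,
                  replicas: int, cluster_domain: str = "cluster.local") -> list:
    """Stable per-Pod DNS names: <set>-<ordinal>.<svc>.<ns>.svc.<domain>."""
    return [
        f"{statefulset}-{i}.{service}.{namespace}.svc.{cluster_domain}"
        for i in range(replicas)
    ]
```

For example, `pod_dns_names("mysql", "mysql", "default", 2)` yields `mysql-0.mysql.default.svc.cluster.local` and `mysql-1.mysql.default.svc.cluster.local`, which is what the Headless Service publishes for cluster members to use.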
1. Example applications:
- Databases such as MySQL and PostgreSQL, which need a fixed ID (for data replication) and an attached NFS volume (persistent storage);
- Clustered software such as ZooKeeper and etcd, which needs fixed membership.
2. Usage limitations
- Added in Kubernetes 1.4; unavailable in 1.3 and earlier;
- DNS: requires the DNS add-on from 1.4 or later; earlier add-ons can only resolve a Service's IP and cannot resolve a Pod's (hostname) domain name;
- Persistent volumes (PVs) are required. For network storage such as NFS, which cannot be provisioned through an API call, the volumes must be created statically before the PetSet; for virtual storage such as AWS EBS, vSphere, or OpenStack Cinder, which can be provisioned through API calls, volumes can be created statically or dynamically via a StorageClass. Note that dynamically provisioned PVs default to the Delete reclaim policy, i.e. deleting the data also deletes the underlying virtual volume;
- Deleting or scaling down a PetSet does not delete the associated persistent volumes, for data-safety reasons;
- A PetSet can only be upgraded manually.
Example
This approach shares a lot with the one in "K8s: automatically creating PVs for data persistence": both need backing NFS storage, an RBAC service account, an nfs-client-provisioner to supply storage, and an SC (StorageClass). The one difference is that this stateful-service persistence setup does not require us to create PVs by hand.
Set up a private registry
[root@docker-k8s01 ~]# docker run -itd --name registry -p 5000:5000 -v /data/registry:/var/lib/registry --restart always registry
// Edit the docker configuration file to point at the private registry
[root@docker-k8s01 ~]# vim /usr/lib/systemd/system/docker.service
...
ExecStart=/usr/bin/dockerd -H unix:// --insecure-registry 192.168.171.151:5000
[root@docker-k8s01 ~]# scp /usr/lib/systemd/system/docker.service docker-k8s02:/usr/lib/systemd/system/docker.service
[root@docker-k8s01 ~]# scp /usr/lib/systemd/system/docker.service docker-k8s03:/usr/lib/systemd/system/
[root@docker-k8s01 ~]# systemctl daemon-reload
[root@docker-k8s01 ~]# systemctl restart docker
Set up the NFS service
[root@docker-k8s01 ~]# yum -y install nfs-utils
[root@docker-k8s01 ~]# cat /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@docker-k8s01 ~]# mkdir /nfsdata
[root@docker-k8s01 ~]# systemctl enable rpcbind
[root@docker-k8s01 ~]# systemctl enable nfs-server
[root@docker-k8s01 ~]# systemctl restart nfs-server
[root@docker-k8s01 ~]# systemctl restart rpcbind
[root@docker-k8s01 ~]# showmount -e
Export list for docker-k8s01:
/nfsdata *
That completes the preparation.
1. Using a custom image, create a StatefulSet resource object with 6 replicas, each with its own persisted data. The persisted directory is /usr/local/apache2/htdocs.
Create the RBAC authorization
// Write the RBAC YAML file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
// Apply the YAML file
[root@docker-k8s01 ~]# kubectl apply -f rbac.yaml
Create the nfs-client-provisioner
// Write the YAML file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: zyz
            - name: NFS_SERVER
              value: 192.168.171.151
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.171.151
            path: /nfsdata
// Apply the YAML file
[root@docker-k8s01 ~]# kubectl apply -f nfs-deployment.yaml
Create the SC (StorageClass)
// Write the StorageClass YAML file
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
provisioner: zyz
reclaimPolicy: Retain
[root@docker-k8s01 ~]# kubectl apply -f sc.yaml
Create the Pods
// Write the YAML file
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
    - name: testweb
      port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 6
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
        - name: testhttpd
          image: 192.168.171.151:5000/zyz:v1
          ports:
            - containerPort: 80
          volumeMounts:
            - name: test
              mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
    - metadata:
        name: test
        annotations:
          volume.beta.kubernetes.io/storage-class: test-sc
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
[root@docker-k8s01 ~]# kubectl apply -f statefulset.yaml
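The PVC names that show up below follow a fixed pattern derived from the volumeClaimTemplate: <template-name>-<statefulset-name>-<ordinal>. A small sketch of that rule (the helper function is hypothetical):

```python
def pvc_names(template: str, statefulset: str, replicas: int) -> list:
    """PVCs a StatefulSet derives from a volumeClaimTemplate, one per ordinal."""
    return [f"{template}-{statefulset}-{i}" for i in range(replicas)]
```

With the template named test and the StatefulSet named statefulset, `pvc_names("test", "statefulset", 6)` gives test-statefulset-0 through test-statefulset-5, matching the claims in the output below; because the names are stable, a rebuilt Pod re-binds to the same claim and thus the same data.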
// Watch the Pods come up
[root@docker-k8s01 ~]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-89699f486-qg7qw 1/1 Running 0 33m
statefulset-0 1/1 Running 0 12m
statefulset-1 1/1 Running 0 42s
statefulset-2 1/1 Running 0 36s
statefulset-3 1/1 Running 0 33s
statefulset-4 1/1 Running 0 30s
statefulset-5 1/1 Running 0 26s
// Check that the PVs and PVCs were created automatically and are Bound
[root@docker-k8s01 ~]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-050c1c1b-bd54-43ff-a654-1783003d350a 100Mi RWO Delete Bound default/test-statefulset-5 test-sc 2m2s
persistentvolume/pvc-53ba1821-5bed-4258-8291-604ca1656cda 100Mi RWO Delete Bound default/test-statefulset-2 test-sc 2m13s
persistentvolume/pvc-b45b773a-475e-4e47-a670-436a604647d7 100Mi RWO Delete Bound default/test-statefulset-3 test-sc 2m10s
persistentvolume/pvc-c9a44625-15d4-4ce9-8177-2e6c72178dd7 100Mi RWO Delete Bound default/test-statefulset-4 test-sc 2m7s
persistentvolume/pvc-f0f4f4c1-6a00-498d-92a9-b794622dce3e 100Mi RWO Delete Bound default/test-statefulset-0 test-sc 2m22s
persistentvolume/pvc-f7d3c7b2-c16d-4204-b2d7-14d3377b48d1 100Mi RWO Delete Bound default/test-statefulset-1 test-sc 2m19s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-statefulset-0 Bound pvc-f0f4f4c1-6a00-498d-92a9-b794622dce3e 100Mi RWO test-sc 13m
persistentvolumeclaim/test-statefulset-1 Bound pvc-f7d3c7b2-c16d-4204-b2d7-14d3377b48d1 100Mi RWO test-sc 2m19s
persistentvolumeclaim/test-statefulset-2 Bound pvc-53ba1821-5bed-4258-8291-604ca1656cda 100Mi RWO test-sc 2m13s
persistentvolumeclaim/test-statefulset-3 Bound pvc-b45b773a-475e-4e47-a670-436a604647d7 100Mi RWO test-sc 2m10s
persistentvolumeclaim/test-statefulset-4 Bound pvc-c9a44625-15d4-4ce9-8177-2e6c72178dd7 100Mi RWO test-sc 2m7s
persistentvolumeclaim/test-statefulset-5 Bound pvc-050c1c1b-bd54-43ff-a654-1783003d350a 100Mi RWO test-sc 2m3s
2. When that is done, the home page of Pods 0 through 5 should read: test---v1.
Then scale the service out to 10 replicas and verify that persistent PVs and PVCs are created for the new Pods as well.
// Write a script that fills in the home pages
[root@docker-k8s01 ~]# cat a.sh
#!/bin/bash
# Write a v1 index page into every Pod's NFS-backed htdocs directory.
for i in `ls /nfsdata`
do
    echo "test---v1" > /nfsdata/${i}/index.html
done
// Look up the Pod IPs and spot-check the home pages
[root@docker-k8s01 ~]# kubectl get pod -o wide
[root@docker-k8s01 ~]# curl 10.244.1.3
test---v1
[root@docker-k8s01 ~]# curl 10.244.2.2
test---v1
[root@docker-k8s01 ~]# curl 10.244.2.3
test---v1
[root@docker-k8s01 ~]# curl 10.244.1.4
test---v1
// Scale out and update
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
    - name: testweb
      port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  updateStrategy:
    rollingUpdate:
      partition: 4
  serviceName: headless-svc
  replicas: 10
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
        - name: testhttpd
          image: 192.168.171.151:5000/zyz:v2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: test
              mountPath: /usr/local/apache2/htdocs
  volumeClaimTemplates:
    - metadata:
        name: test
        annotations:
          volume.beta.kubernetes.io/storage-class: test-sc
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
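The partition: 4 field controls the rolling update: only Pods whose ordinal is greater than or equal to the partition receive the new template (here the v2 image), updated from the highest ordinal downward, while lower ordinals keep the old one. A sketch of that rule (the helper function is hypothetical):

```python
def pods_to_update(statefulset: str, replicas: int, partition: int) -> list:
    """Ordinals >= partition are updated, counting down from the highest."""
    return [f"{statefulset}-{i}" for i in range(replicas - 1, partition - 1, -1)]
```

With 10 replicas and partition: 4, only statefulset-4 through statefulset-9 are rolled to v2, which matches the Terminating/Recreating sequence in the watch output below; statefulset-0 through statefulset-3 keep running v1.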
// Watch the update progress
[root@docker-k8s01 ~]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-89699f486-zqxsm 1/1 Running 0 18m
statefulset-0 1/1 Running 0 17m
statefulset-1 1/1 Running 0 17m
statefulset-2 1/1 Running 0 17m
statefulset-3 1/1 Running 0 16m
statefulset-4 1/1 Running 0 16m
statefulset-5 1/1 Running 0 16m
statefulset-6 1/1 Running 0 7s
statefulset-7 1/1 Running 0 4s
statefulset-8 0/1 Pending 0 1s
statefulset-8 0/1 Pending 0 1s
statefulset-8 0/1 ContainerCreating 0 1s
statefulset-8 1/1 Running 0 3s
statefulset-9 0/1 Pending 0 0s
statefulset-9 0/1 Pending 0 0s
statefulset-9 0/1 Pending 0 2s
statefulset-9 0/1 ContainerCreating 0 2s
statefulset-9 1/1 Running 0 3s
statefulset-5 1/1 Terminating 0 16m
statefulset-5 0/1 Terminating 0 16m
statefulset-5 0/1 Terminating 0 17m
statefulset-5 0/1 Terminating 0 17m
statefulset-5 0/1 Pending 0 0s
statefulset-5 0/1 Pending 0 0s
statefulset-5 0/1 ContainerCreating 0 0s
statefulset-5 1/1 Running 0 2s
statefulset-4 1/1 Terminating 0 17m
statefulset-4 0/1 Terminating 0 17m
statefulset-4 0/1 Terminating 0 17m
statefulset-4 0/1 Terminating 0 17m
statefulset-4 0/1 Pending 0 0s
statefulset-4 0/1 Pending 0 0s
statefulset-4 0/1 ContainerCreating 0 0s
statefulset-4 1/1 Running 0 0s
// Check the PVs and PVCs created for the scaled-out Pods
[root@docker-k8s01 ~]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-2a4e3249-8513-4e42-b883-aca51399d667 100Mi RWO Delete Bound default/test-statefulset-3 test-sc 19m
persistentvolume/pvc-394568f7-5a38-413f-aaf2-4fa9f452a515 100Mi RWO Delete Bound default/test-statefulset-8 test-sc 2m25s
persistentvolume/pvc-40149679-e321-450d-8286-f4cd9a67f20f 100Mi RWO Delete Bound default/test-statefulset-5 test-sc 19m
persistentvolume/pvc-4f0a4fe3-8fa1-4bb9-ab7f-8652e2f4667c 100Mi RWO Delete Bound default/test-statefulset-0 test-sc 19m
persistentvolume/pvc-6c9bb0ed-4705-451b-a75e-2876c83a9cee 100Mi RWO Delete Bound default/test-statefulset-7 test-sc 2m28s
persistentvolume/pvc-88652b42-f9cb-4c83-8660-5edfb2c7476f 100Mi RWO Delete Bound default/test-statefulset-4 test-sc 19m
persistentvolume/pvc-8d20cbb9-6aac-4ea6-8517-bd0c124b37a4 100Mi RWO Delete Bound default/test-statefulset-9 test-sc 2m22s
persistentvolume/pvc-8d65da35-2219-459b-a26d-6e7ee0b7be48 100Mi RWO Delete Bound default/test-statefulset-6 test-sc 2m31s
persistentvolume/pvc-c2efd8ce-f90b-4eef-8f49-cdc55c375701 100Mi RWO Delete Bound default/test-statefulset-2 test-sc 19m
persistentvolume/pvc-e2a2b826-b8c3-485f-89ce-d0e54b3012f2 100Mi RWO Delete Bound default/test-statefulset-1 test-sc 19m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-statefulset-0 Bound pvc-4f0a4fe3-8fa1-4bb9-ab7f-8652e2f4667c 100Mi RWO test-sc 19m
persistentvolumeclaim/test-statefulset-1 Bound pvc-e2a2b826-b8c3-485f-89ce-d0e54b3012f2 100Mi RWO test-sc 19m
persistentvolumeclaim/test-statefulset-2 Bound pvc-c2efd8ce-f90b-4eef-8f49-cdc55c375701 100Mi RWO test-sc 19m
persistentvolumeclaim/test-statefulset-3 Bound pvc-2a4e3249-8513-4e42-b883-aca51399d667 100Mi RWO test-sc 19m
persistentvolumeclaim/test-statefulset-4 Bound pvc-88652b42-f9cb-4c83-8660-5edfb2c7476f 100Mi RWO test-sc 19m
persistentvolumeclaim/test-statefulset-5 Bound pvc-40149679-e321-450d-8286-f4cd9a67f20f 100Mi RWO test-sc 19m
persistentvolumeclaim/test-statefulset-6 Bound pvc-8d65da35-2219-459b-a26d-6e7ee0b7be48 100Mi RWO test-sc 2m31s
persistentvolumeclaim/test-statefulset-7 Bound pvc-6c9bb0ed-4705-451b-a75e-2876c83a9cee 100Mi RWO test-sc 2m28s
persistentvolumeclaim/test-statefulset-8 Bound pvc-394568f7-5a38-413f-aaf2-4fa9f452a515 100Mi RWO test-sc 2m25s
persistentvolumeclaim/test-statefulset-9 Bound pvc-8d20cbb9-6aac-4ea6-8517-bd0c124b37a4 100Mi RWO test-sc 2m22s
[root@docker-k8s01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-89699f486-zqxsm 1/1 Running 0 21m 10.244.1.2 docker-k8s02 <none> <none>
statefulset-0 1/1 Running 0 20m 10.244.1.3 docker-k8s02 <none> <none>
statefulset-1 1/1 Running 0 19m 10.244.2.2 docker-k8s03 <none> <none>
statefulset-2 1/1 Running 0 19m 10.244.2.3 docker-k8s03 <none> <none>
statefulset-3 1/1 Running 0 19m 10.244.1.4 docker-k8s02 <none> <none>
statefulset-4 1/1 Running 0 2m15s 10.244.2.7 docker-k8s03 <none> <none>
statefulset-5 1/1 Running 0 2m27s 10.244.1.8 docker-k8s02 <none> <none>
statefulset-6 1/1 Running 0 2m52s 10.244.2.5 docker-k8s03 <none> <none>
statefulset-7 1/1 Running 0 2m49s 10.244.1.6 docker-k8s02 <none> <none>
statefulset-8 1/1 Running 0 2m46s 10.244.2.6 docker-k8s03 <none> <none>
statefulset-9 1/1 Running 0 2m43s 10.244.1.7 docker-k8s02 <none> <none>
// Check the home pages (the newly created Pods have empty htdocs directories, so Apache returns its default directory listing)
[root@docker-k8s01 ~]# curl 10.244.2.5
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<head>
<title>Index of /</title>
</head>
<body>
<h1>Index of /</h1>
<ul></ul>
</body></html>
[root@docker-k8s01 ~]# curl 10.244.1.8
test---v1
Update the service: during the update, every Pod with an ordinal greater than 3 should be updated to serve Version v2.
// Write the script
[root@docker-k8s01 ~]# cat b.sh
#!/bin/bash
# Write a v2 index page for every Pod whose ordinal is greater than 3.
# The NFS directory names look like default-test-statefulset-<N>-pvc-...,
# so the ordinal is the fourth '-'-separated field.
for i in `ls /nfsdata/`
do
    if [ `echo $i | awk -F - '{print $4}'` -gt 3 ]
    then
        echo "test---v2" > /nfsdata/${i}/index.html
    fi
done
[root@docker-k8s01 ~]# sh b.sh
// Look up the Pod IPs and verify the update
[root@docker-k8s01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-89699f486-zqxsm 1/1 Running 0 25m 10.244.1.2 docker-k8s02 <none> <none>
statefulset-0 1/1 Running 0 24m 10.244.1.3 docker-k8s02 <none> <none>
statefulset-1 1/1 Running 0 24m 10.244.2.2 docker-k8s03 <none> <none>
statefulset-2 1/1 Running 0 24m 10.244.2.3 docker-k8s03 <none> <none>
statefulset-3 1/1 Running 0 24m 10.244.1.4 docker-k8s02 <none> <none>
statefulset-4 1/1 Running 0 6m36s 10.244.2.7 docker-k8s03 <none> <none>
statefulset-5 1/1 Running 0 6m48s 10.244.1.8 docker-k8s02 <none> <none>
statefulset-6 1/1 Running 0 7m13s 10.244.2.5 docker-k8s03 <none> <none>
statefulset-7 1/1 Running 0 7m10s 10.244.1.6 docker-k8s02 <none> <none>
statefulset-8 1/1 Running 0 7m7s 10.244.2.6 docker-k8s03 <none> <none>
statefulset-9 1/1 Running 0 7m4s 10.244.1.7 docker-k8s02 <none> <none>
// Access statefulset-4
[root@docker-k8s01 ~]# curl 10.244.2.7
test---v2
// Access statefulset-0
[root@docker-k8s01 ~]# curl 10.244.1.3
test---v1
Source: oschina
Link: https://my.oschina.net/u/4408053/blog/4561059