I. Plan a cluster of three zk servers

ip: 172.18.1.1  hostname: zk1  myid: 1
ip: 172.18.1.2  hostname: zk2  myid: 2
ip: 172.18.1.3  hostname: zk3  myid: 3
Q: why should a zookeeper ensemble have an odd number of servers?

1. Fault tolerance: a write (create/update/delete) only succeeds after more than half of the servers acknowledge it.
2. Split-brain prevention: a zookeeper ensemble must have exactly one leader. When the leader goes down, the remaining servers elect a new one, which again requires more than half of the total votes.

With 2 servers in total, half is 1, so a majority is at least 2: no server may fail.
With 3 servers, half is 1.5, so a majority is at least 2: one server may fail.
With 4 servers, half is 2, so a majority is at least 3: one server may fail.
With 5 servers, half is 2.5, so a majority is at least 3: two servers may fail.
With 6 servers, half is 3, so a majority is at least 4: two servers may fail.

As you can see:
To tolerate two failures, 5 servers are cheaper than 6.
To tolerate one failure, 3 servers are cheaper than 4.
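The pattern above follows directly from the majority rule: a write needs floor(n/2)+1 acknowledgements, so an ensemble of n servers tolerates n - (floor(n/2)+1) failures. A minimal shell sketch of the arithmetic:

```shell
#!/bin/sh
# For an ensemble of n servers, a quorum is floor(n/2)+1 votes,
# so the ensemble survives n - (floor(n/2)+1) server failures.
for n in 2 3 4 5 6; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "n=$n quorum=$quorum tolerated=$tolerated"
done
```

Running it shows tolerated=1 for both 3 and 4 servers, and tolerated=2 for both 5 and 6, which is why odd sizes are the economical choice.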
Note: Liu Hongdi's "architect forest" is a blog focused on architecture: https://www.cnblogs.com/architectforest
The corresponding source code is available here: https://github.com/liuhongdi/
Author: Liu Hongdi  Email: 371125307@qq.com
II. Install zookeeper on each server: install Java

Note: download the JDK package from the official Java site.

1. Unpack and install:
[root@zookeeper ~]# cd /usr/local/source/
[root@zookeeper source]# tar -zxvf jdk-13.0.2_linux-x64_bin.tar.gz
[root@zookeeper source]# mkdir /usr/local/soft
[root@zookeeper source]# mv jdk-13.0.2 /usr/local/soft/
2. Configure the environment variables:

[root@zookeeper source]# vi /etc/profile

Append at the end:

export JAVA_HOME=/usr/local/soft/jdk-13.0.2
export PATH=$PATH:$JAVA_HOME/bin

(Note: JDK 9+ no longer ships dt.jar/tools.jar, and JDK 11+ has no jre directory, so the traditional JRE_HOME and CLASSPATH entries serve no purpose for JDK 13 and are omitted here.)
Reload to make the variables take effect:

[root@zookeeper source]# source /etc/profile

3. Check the version to verify the installation:

[root@zookeeper source]# java --version
java 13.0.2 2020-01-14
Java(TM) SE Runtime Environment (build 13.0.2+8)
Java HotSpot(TM) 64-Bit Server VM (build 13.0.2+8, mixed mode, sharing)
III. Install zookeeper on each server: install zookeeper

1. Install wget, used to download files:

[root@zookeeper source]# yum install wget

2. Download zookeeper:

[root@zookeeper source]# wget https://downloads.apache.org/zookeeper/zookeeper-3.6.0/apache-zookeeper-3.6.0-bin.tar.gz

3. Unpack and install:

[root@zookeeper source]# tar -zxvf apache-zookeeper-3.6.0-bin.tar.gz
[root@zookeeper source]# mv apache-zookeeper-3.6.0-bin/ /usr/local/soft/
4. Create the data and log directories:

[root@zookeeper source]# mkdir -p /data/zookeeper/data
[root@zookeeper source]# mkdir -p /data/zookeeper/datalogs
[root@zookeeper source]# mkdir -p /data/zookeeper/logs

Note:
data: snapshot data
datalogs: transaction logs
logs: the zk application's own logs
5. Generate the config file:

[root@zookeeper source]# cd /usr/local/soft/apache-zookeeper-3.6.0-bin/conf/
[root@zookeeper conf]# cp zoo_sample.cfg zoo.cfg

6. Edit the config file:

[root@zookeeper conf]# vi zoo.cfg

Set:

dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/datalogs
admin.enableServer=false

Notes:
admin.enableServer=false disables zk's built-in web admin server.
dataDir sets zk's data directory.
dataLogDir sets zk's transaction log directory.
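For reference, after these edits the effective zoo.cfg looks roughly like this (tickTime, initLimit, syncLimit and clientPort keep the defaults inherited from zoo_sample.cfg; your sample file's values may differ):

```ini
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/datalogs
admin.enableServer=false
```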
7. Configure the environment variables:

[root@zookeeper conf]# vi /etc/profile

Append at the end:

export ZK_HOME=/usr/local/soft/apache-zookeeper-3.6.0-bin
export PATH=$ZK_HOME/bin:$PATH

Reload to make the variables take effect:

[root@zookeeper conf]# source /etc/profile

8. Test starting and stopping zookeeper:

[root@zookeeper conf]# zkServer.sh start
[root@zookeeper conf]# zkServer.sh stop
IV. Install zookeeper on each server: make zookeeper work with systemd

1. Edit the zkEnv.sh script:

[root@zookeeper conf]# vi /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/zkEnv.sh

Add this line:

JAVA_HOME=/usr/local/soft/jdk-13.0.2

just above the ZOOBINDIR= line.

Note: this fixes the "java not found" error when the service is started from systemctl, which does not read /etc/profile.

2. Find the ZOO_LOG_DIR line and change it to:

ZOO_LOG_DIR="/data/zookeeper/logs"

This directory holds zk's runtime logs.
3. Add a service file so systemd can manage zookeeper:

[root@zookeeper conf]# vi /etc/systemd/system/zookeeper.service

Content:

[Unit]
Description=zookeeper.service
After=network.target
ConditionPathExists=/usr/local/soft/apache-zookeeper-3.6.0-bin/conf/zoo.cfg

[Service]
Type=forking
User=root
Group=root
ExecStart=/usr/local/soft/apache-zookeeper-3.6.0-bin/bin/zkServer.sh start
ExecStop=/usr/local/soft/apache-zookeeper-3.6.0-bin/bin/zkServer.sh stop

[Install]
WantedBy=multi-user.target
4. Test starting/stopping zk via systemd:

[root@zookeeper conf]# systemctl daemon-reload
[root@zookeeper conf]# systemctl start zookeeper
[root@zookeeper conf]# systemctl stop zookeeper
V. Install zookeeper on each server: check the installed zk version

1. Install nc, used as the client for the four-letter commands:

[root@zookeeper conf]# yum install nc

2. The first attempt to query the version fails:

[root@zookeeper conf]# echo stat|nc 127.0.0.1 2181
stat is not executed because it is not in the whitelist.
Fix:

[root@zookeeper conf]# vi /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/zkServer.sh

Find this block:

ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi

and add a new line below the fi:

ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}"

Save, exit, and restart the service:

[root@zookeeper conf]# systemctl stop zookeeper
[root@zookeeper conf]# systemctl start zookeeper
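Note: patching zkServer.sh is not the only option. Since ZooKeeper 3.4.10 the whitelist is also a documented zoo.cfg setting, which survives upgrades of the start script:

```ini
# in zoo.cfg; '*' opens all four-letter commands.
# A narrower list such as "stat, ruok, mntr" is safer in production.
4lw.commands.whitelist=*
```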
3. Query the zk version again:

[root@zookeeper conf]# echo stat|nc 127.0.0.1 2181
Zookeeper version: 3.6.0--b4c89dc7f6083829e18fae6e446907ae0b1f22d7, built on 02/25/2020 14:38 GMT
Clients:
 /127.0.0.1:47654[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0.0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x0
Mode: standalone
Node count: 5
VI. Configure the cluster on the three zookeeper servers

1. Add the cluster configuration to the config file:

[root@zk1 ~]# vi /usr/local/soft/apache-zookeeper-3.6.0-bin/conf/zoo.cfg

Add:

#cluster
server.1=172.18.1.1:2888:3888
server.2=172.18.1.2:2888:3888
server.3=172.18.1.3:2888:3888

Notes:
In server.n, n is a number: the id of that zookeeper server in the ensemble.
2888 is the port followers use to connect to the leader: the leader listens on it, and followers connect to it to sync data.
3888 is the port used for leader election within the ensemble.
Add this same cluster block on all three machines; the content is identical everywhere.
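Since the block is identical on all three machines, it can be appended without opening an editor; a minimal sketch, assuming the paths used in this article:

```shell
#!/bin/sh
# Append the identical cluster block to zoo.cfg (run once per host).
ZOOCFG=/usr/local/soft/apache-zookeeper-3.6.0-bin/conf/zoo.cfg
cat >> "$ZOOCFG" <<'EOF'
#cluster
server.1=172.18.1.1:2888:3888
server.2=172.18.1.2:2888:3888
server.3=172.18.1.3:2888:3888
EOF
```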
2. Give each machine its id:

On each machine, create an id file named myid in the directory that dataDir points to in zoo.cfg.

Note: the value inside myid must match the server.n number assigned to that machine's ip in zoo.cfg.

On zk1 (172.18.1.1):

[root@zk1 ~]# vi /data/zookeeper/data/myid
[root@zk1 ~]# more /data/zookeeper/data/myid
1

On zk2 (172.18.1.2):

[root@zk2 ~]# vi /data/zookeeper/data/myid
[root@zk2 ~]# more /data/zookeeper/data/myid
2

On zk3 (172.18.1.3):

[root@zk3 ~]# vi /data/zookeeper/data/myid
[root@zk3 ~]# more /data/zookeeper/data/myid
3
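vi works, but since the file holds a single number, echo is quicker; a sketch for zk1 (change the value per host):

```shell
# Write the id non-interactively; use 2 on zk2 and 3 on zk3.
echo 1 > /data/zookeeper/data/myid
cat /data/zookeeper/data/myid   # prints: 1
```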
VII. Start the servers one by one and check each zookeeper's state

1. Start the zk service. On each of the three machines run:

systemctl start zookeeper

2. Check the status on each machine:

[root@zk1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

zk1 runs as a follower.

[root@zk2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

zk2 runs as the leader.

[root@zk3 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

zk3 runs as a follower.
3. The mode can also be checked with the stat four-letter command, for example:

[root@zk1 ~]# echo stat | nc 172.18.1.1 2181
Zookeeper version: 3.6.0--b4c89dc7f6083829e18fae6e446907ae0b1f22d7, built on 02/25/2020 14:38 GMT
Clients:
 /172.18.1.1:59284[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/1.9375/41
Received: 34
Sent: 33
Connections: 1
Outstanding: 0
Zxid: 0x100000004
Mode: follower
Node count: 6
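To see all three modes at a glance, the same four-letter command can be looped over the ensemble (a sketch, assuming nc is installed on the machine you run it from and the whitelist is enabled as configured earlier):

```shell
#!/bin/sh
# Query each member's Mode: line via the stat four-letter command.
for ip in 172.18.1.1 172.18.1.2 172.18.1.3; do
  printf '%s -> ' "$ip"
  echo stat | nc "$ip" 2181 | grep '^Mode:'
done
```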
4. What is the mode of a zookeeper running standalone?

[root@zk /]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: standalone

As shown, its Mode is standalone.
VIII. Test: connect to the zk service and create a znode

1. Create a node on zk3:

[root@zk3 ~]# zkCli.sh -server localhost
[zk: localhost(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost(CONNECTED) 1] create /demo 'this is a demo'
Created /demo
[zk: localhost(CONNECTED) 2] ls /
[demo, zookeeper]

2. Read the node from zk1:

[root@zk1 ~]# zkCli.sh -server localhost
[zk: localhost(CONNECTED) 2] get /demo
this is a demo

The newly created node has been replicated to zk1.
IX. Test: simulate the failure of one node in the zk ensemble

1. zk2 is currently the leader; stop it and watch how each server's mode changes.

Stop zk2:

[root@zk2 ~]# systemctl stop zookeeper

Check the status on zk1:

[root@zk1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

zk1 is still a follower.

Check the status on zk3:

[root@zk3 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

zk3 has become the leader.

2. Restart zk2:

[root@zk2 ~]# systemctl start zookeeper

Check its status again: it has become a follower.

[root@zk2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
3. Write data on zk1 and observe the result on zk2:

[root@zk1 ~]# zkCli.sh -server localhost
[zk: localhost(CONNECTED) 1] create /demo2 'demo2'
Created /demo2
[zk: localhost(CONNECTED) 2] get /demo2
demo2

Back on zk2:

[root@zk2 ~]# zkCli.sh -server localhost
[zk: localhost(CONNECTED) 1] get /demo2
demo2

4. Conclusion: zookeeper's cluster mode effectively protects against single points of failure.
X. Check the centos version

[root@localhost liuhongdi]# cat /etc/redhat-release
CentOS Linux release 8.1.1911 (Core)

Source: https://www.cnblogs.com/architectforest/p/12540013.html