ZooKeeper
ZooKeeper is a strongly consistent distributed data store: multiple nodes form a cluster, and the service keeps working even if any single node goes down.
Standalone Mode
Download the ZooKeeper release tarball and extract it:
➜ ~ tar -xvzf apache-zookeeper-3.5.6-bin.tar.gz
Enter the extracted directory and rename the sample configuration file under conf:
➜ apache-zookeeper-3.5.6-bin mv conf/zoo_sample.cfg conf/zoo.cfg
Start ZooKeeper with start-foreground, which runs the server in the foreground so its output is easy to watch:
➜ apache-zookeeper-3.5.6-bin bin/zkServer.sh start-foreground
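To confirm the server is up, one quick check is the srvr four-letter command against the default client port 2181 (srvr is on the four-letter-word whitelist by default in 3.5.x); it should print the version, latency stats, and Mode: standalone:
➜ ~ echo srvr | nc 127.0.0.1 2181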
Quorum Mode
Using zoo.cfg as a base, create zoo_1.cfg, zoo_2.cfg, and zoo_3.cfg, appending the extra server entries below to each. In each entry, the second and third colon-separated fields are TCP ports, used for quorum communication and leader election respectively.
server.1=127.0.0.1:2222:2223
server.2=127.0.0.1:3333:3334
server.3=127.0.0.1:4444:4445
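Since all three servers share one host, each config also needs its own dataDir and clientPort. As a sketch, zoo_1.cfg might look like the following (the dataDir and client port here are assumptions chosen to match the logs later in this walkthrough; zoo_2.cfg and zoo_3.cfg would use the zoo_2/zoo_3 directories and client ports 2182/2183):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper/zoo_1/data
clientPort=2181
server.1=127.0.0.1:2222:2223
server.2=127.0.0.1:3333:3334
server.3=127.0.0.1:4444:4445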
When starting a server, we need to tell it which server it is. ZooKeeper obtains its server ID by reading a file named myid under dataDir.
➜ zookeeper echo 1 > zoo_1/data/myid
➜ zookeeper echo 2 > zoo_2/data/myid
➜ zookeeper echo 3 > zoo_3/data/myid
Start the servers, beginning with zoo_1:
➜ zoo_1 ~/apache-zookeeper-3.5.6-bin/bin/zkServer.sh start-foreground ./zoo_1.cfg
With only one of the three servers up, there is no quorum yet and the ensemble cannot serve requests; the log shows the server stuck in leader election:
2020-01-01 12:08:37,016 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):QuorumPeer@1193] - LOOKING
2020-01-01 12:08:37,016 [myid:1] - INFO [QuorumPeer[myid=1](plain=/0:0:0:0:0:0:0:0:2181)(secure=disabled):FastLeaderElection@885] - New election. My id = 1, proposed zxid=0x0
2020-01-01 12:08:37,021 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@679] - Cannot open channel to 2 at election address /127.0.0.1:3334
java.net.ConnectException: Connection refused (Connection refused)
...
2020-01-01 12:08:37,031 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@679] - Cannot open channel to 3 at election address /127.0.0.1:4445
java.net.ConnectException: Connection refused (Connection refused)
...
Start the second server; two out of three servers are enough to form a quorum:
➜ zoo_2 ~/apache-zookeeper-3.5.6-bin/bin/zkServer.sh start-foreground ./zoo_2.cfg
Server 2 is elected leader:
2020-01-01 12:10:40,802 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2182)(secure=disabled):Leader@464] - LEADING - LEADER ELECTION TOOK - 54 MS
2020-01-01 12:10:40,804 [myid:2] - INFO [QuorumPeer[myid=2](plain=/0:0:0:0:0:0:0:0:2182)(secure=disabled):FileTxnSnapLog@384] - Snapshotting: 0x0 to /tmp/zookeeper/zoo_2/data/version-2/snapshot.0
2020-01-01 12:10:40,812 [myid:2] - INFO [LearnerHandler-/127.0.0.1:62308:LearnerHandler@406] - Follower sid: 1 : info : 127.0.0.1:2222:2223:participant
2020-01-01 12:10:40,816 [myid:2] - INFO [LearnerHandler-/127.0.0.1:62308:ZKDatabase@295] - On disk txn sync enabled with snapshotSizeFactor 0.33
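The third server can be started the same way to complete the ensemble; since a leader already exists, it should join as a follower:
➜ zoo_3 ~/apache-zookeeper-3.5.6-bin/bin/zkServer.sh start-foreground ./zoo_3.cfg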
Accessing the Cluster
➜ bin ./zkCli.sh -server 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
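To check which role each server ended up with, zkServer.sh status accepts the same per-server config file argument used above; given the election log, server 2 should report Mode: leader:
➜ zoo_2 ~/apache-zookeeper-3.5.6-bin/bin/zkServer.sh status ./zoo_2.cfg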
A Publish/Subscribe Example
Start a zkCli session zk_0 and create an ephemeral znode:
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 9] create -e /master "this is master"
Created /master
Start another session, zk_1, and set a watch on the znode:
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 3] ls /master true
'ls path [watch]' has been deprecated. Please use 'ls [-w] path' instead.
[]
Start a third session, zk_2, and set a watch on the znode as well:
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 1] ls /master true
'ls path [watch]' has been deprecated. Please use 'ls [-w] path' instead.
[]
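As the deprecation notice points out, the newer 3.5.x syntax for the same watch is ls -w /master; stat -w /master would watch the znode itself rather than its child list. Either form fires a NodeDeleted event when /master is removed.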
Delete /master from zk_0; zk_1 and zk_2 both receive the deletion notification:
zk_0:
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 10] delete /master
zk_1/zk_2:
[zk: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183(CONNECTED) 2]
WATCHER::
WatchedEvent state:SyncConnected type:NodeDeleted path:/master
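Because /master was created with -e, it is ephemeral: instead of an explicit delete, simply ending the zk_0 session (for example with quit) would also remove the znode and fire the same NodeDeleted event on the watchers. This is what makes the pattern useful for detecting a crashed master.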
Source: oschina
Link: https://my.oschina.net/u/3017278/blog/3152390