[ZooKeeper Part 5] Implementing the Master-Worker Pattern with ZooKeeper


  In [ZooKeeper之一] ZooKeeper简介, the first article in this series, we introduced the master-worker architecture. A quick recap: the architecture needs a master, which accepts tasks submitted by clients, monitors the status of every worker, and assigns tasks to workers; each worker executes the tasks the master assigns to it and returns the results; when the master receives a result, it passes it back to the client.

(1) Master election and failover

  In the master-worker pattern there must first be a master, elected from among several standby nodes. Once a master is chosen, every standby node that did not win the election sets a watch on the master znode; when the master fails, all standby nodes are notified and a new master is elected.

(2) Dynamic detection of workers

  Workers execute the tasks the master assigns to them. For the master to be aware of which workers exist, each worker creates a znode representing itself under an agreed path in ZooKeeper (for example /workers). When a worker fails, its znode should be removed automatically, so these znodes are created as ephemeral nodes.

(3) Clients and tasks

  Clients submit tasks to the system and wait for the results. Similarly, znodes are created under an agreed path in ZooKeeper (for example /tasks), one znode per task. To keep submitted tasks from being lost if the system crashes, task znodes should be persistent nodes.

  Next, start the ZooKeeper server and the client tool, and let's implement it!


  ZooKeeper can implement a simple distributed lock by having multiple processes try to create the same znode (for example /lock) at the same time: whichever process succeeds in creating /lock is said to hold the lock. The same lock primitive can be used to determine the master. If the znode is /master, then to guarantee that a new master can be elected after the lock holder crashes, /master must be created as an ephemeral znode; viewed as a lock, the lock resource has to be released before the standby nodes can compete for it again.

  Here, several zkCli terminals are started to represent the different nodes competing for the lock.

  When a node competes for the lock (and thereby for mastership), there are two possible outcomes: either it grabs the lock, or the create fails with a node-already-exists error, in which case it should set a watch so that it is notified when the lock is released and can compete again. Node 1 and node 2 below illustrate the two cases, as sketched next:
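  A minimal zkCli sketch of the two outcomes (the data values node1/node2 are placeholders used only to tell the terminals apart):

# Terminal of node 1: the create succeeds, node 1 becomes master
[zk: localhost:2181(CONNECTED) 0] create -e /master "node1"
Created /master

# Terminal of node 2: the create fails, so node 2 sets a watch on /master
[zk: localhost:2181(CONNECTED) 0] create -e /master "node2"
Node already exists: /master
[zk: localhost:2181(CONNECTED) 1] stat /master watch
(stat output omitted)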

  Now close node 1's terminal to simulate a master failure. Once the session timeout has elapsed, node 2 receives the notification.

  The standby nodes now get a chance to grab the lock; since node 2 is the only standby here and has no competition, it is promoted to master, as sketched below.
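  When node 1's session expires, ZooKeeper removes the ephemeral /master znode and node 2's watch fires. A sketch of what node 2 sees in its terminal, followed by the successful re-create:

WATCHER::

WatchedEvent state:SyncConnected type:NodeDeleted path:/master

[zk: localhost:2181(CONNECTED) 2] create -e /master "node2"
Created /master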

  The master first needs to create the agreed directories that hold the workers, the tasks, and the task assignments; and since it must dynamically monitor changes to the workers and the tasks, it also sets watches on the workers and tasks directories.
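  A sketch of the master's bootstrap in zkCli (the parent znodes are persistent; the empty data strings are placeholders):

[zk: localhost:2181(CONNECTED) 0] create /workers ""
Created /workers
[zk: localhost:2181(CONNECTED) 1] create /tasks ""
Created /tasks
[zk: localhost:2181(CONNECTED) 2] create /assign ""
Created /assign
[zk: localhost:2181(CONNECTED) 3] ls /workers watch
[]
[zk: localhost:2181(CONNECTED) 4] ls /tasks watch
[]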



  A worker first creates a child znode under /workers, and then creates a child znode under /assign to receive the tasks the master assigns to it. Since the worker must dynamically detect changes to its assigned tasks, it also sets a watch on its assignment directory.
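  A sketch of a worker registering itself (the name worker1 is illustrative; the worker's own znode is ephemeral per the design above, while its assignment znode is persistent):

[zk: localhost:2181(CONNECTED) 0] create -e /workers/worker1 "Idle"
Created /workers/worker1
[zk: localhost:2181(CONNECTED) 1] create /assign/worker1 ""
Created /assign/worker1
[zk: localhost:2181(CONNECTED) 2] ls /assign/worker1 watch
[]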



  A client represents each task it submits as a znode created under /tasks.
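  A sketch of a client submitting a task: the -s flag makes the znode sequential, so every task gets a unique, ordered name (the data "cmd" is a placeholder for the actual command):

[zk: localhost:2181(CONNECTED) 0] create -s /tasks/task- "cmd"
Created /tasks/task-0000000000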

Configuring an ActiveMQ master-slave cluster based on ZooKeeper

A project of mine needs a message queue; ActiveMQ was chosen because it is relatively simple to use. The focus here is the environment deployment.

 

0. Server environment

RedHat 7
10.90.7.2
10.90.7.10
10.90.2.102

 

1. Download and install ZooKeeper

Download: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.3.6/zookeeper-3.3.6.tar.gz

ZooKeeper is installed here as three instances on a single machine, i.e. a pseudo-cluster; a real multi-machine cluster would work just as well.
The three instances are installed on the 7.10 server.

 

Unpack ZooKeeper:

[root@localhost zookeeper-3.3.6]# pwd
/opt/amq/zookeeper-3.3.6

Then copy zookeeper-3.3.6 three times, as zk1, zk2, and zk3:

[root@localhost amq]# ll
total 64592
drwxr-xr-x 11 root root     4096 Jul 22 11:44 zk1
drwxr-xr-x 11 root root     4096 Jul 22 11:45 zk2
drwxr-xr-x 11 root root     4096 Jul 22 11:49 zk3
drwxr-xr-x 10 www  www      4096 Jul 29  2012 zookeeper-3.3.6
-rw-r--r--  1 root root 11833706 Jul 22 09:27 zookeeper-3.3.6.tar.gz

Create a data directory under each of zk1, zk2, and zk3.
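For example (a sketch; the paths follow the layout above):

[root@localhost amq]# mkdir /opt/amq/zk1/data /opt/amq/zk2/data /opt/amq/zk3/data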

Edit the configuration file of each of zk1, zk2, and zk3:

[root@localhost conf]# pwd
/opt/amq/zk1/conf
[root@localhost conf]# mv zoo_sample.cfg zoo.cfg
[root@localhost conf]# ll
total 12
-rw-r--r-- 1 root root  535 Jul 22 11:11 configuration.xsl
-rw-r--r-- 1 root root 1698 Jul 22 11:11 log4j.properties
-rw-r--r-- 1 root root  477 Jul 22 13:07 zoo.cfg

 

Set the contents of zk1's zoo.cfg to the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/opt/amq/zk1/data
# the port at which the clients will connect
clientPort=2181
server.1=10.90.7.10:2887:3887  
server.2=10.90.7.10:2888:3888  
server.3=10.90.7.10:2889:3889

Set the contents of zk2's zoo.cfg to the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/opt/amq/zk2/data
# the port at which the clients will connect
clientPort=2182
server.1=10.90.7.10:2887:3887  
server.2=10.90.7.10:2888:3888  
server.3=10.90.7.10:2889:3889

Set the contents of zk3's zoo.cfg to the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/opt/amq/zk3/data
# the port at which the clients will connect
clientPort=2183
server.1=10.90.7.10:2887:3887  
server.2=10.90.7.10:2888:3888  
server.3=10.90.7.10:2889:3889

 

One more step: under the data directory of each of zk1, zk2, and zk3 (the data directory created earlier), create a file named myid whose content is the number from the corresponding server.X line in zoo.cfg: 1, 2, or 3.
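For example (a sketch):

[root@localhost amq]# echo 1 > /opt/amq/zk1/data/myid
[root@localhost amq]# echo 2 > /opt/amq/zk2/data/myid
[root@localhost amq]# echo 3 > /opt/amq/zk3/data/myid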

[root@localhost data]# pwd
/opt/amq/zk1/data
[root@localhost data]# ll
total 8
-rw-r--r-- 1 root root  2 Jul 22 11:45 myid
drwxr-xr-x 2 root root 43 Jul 22 13:10 version-2
-rw-r--r-- 1 root root  5 Jul 22 13:07 zookeeper_server.pid

Finally, go to the bin directory of each of zk1, zk2, and zk3 and start ZooKeeper. For example, starting zk1 (running zkServer.sh with no arguments just prints the usage):

[root@localhost bin]# ./zkServer.sh 
JMX enabled by default
Using config: /opt/amq/zk1/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}

[root@localhost bin]# ./zkServer.sh start

 

At this point, the three-node ZooKeeper ensemble is up.
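Each instance's role can be checked with zkServer.sh status; one instance should report itself as the leader and the other two as followers. A sketch of the expected output:

[root@localhost bin]# ./zkServer.sh status
JMX enabled by default
Using config: /opt/amq/zk1/bin/../conf/zoo.cfg
Mode: follower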

If the instances are started on three different machines, the only configuration difference is a small part of zoo.cfg, namely these lines:
server.A=B:C:D
where:
A is a number indicating which server this is;
B is the server's IP address;
C is the port this server uses to exchange information with the ensemble's Leader;
D is the port used for leader election: if the Leader fails, the servers communicate with each other over this port to elect a new Leader.
In a pseudo-cluster B is the same for every instance, so the ZooKeeper instances cannot share communication ports and each must be assigned its own.
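For example, on three separate machines the same ports can be reused on every host. A sketch using the three servers above (each machine would also keep clientPort=2181 and point dataDir at its own data directory):

server.1=10.90.7.2:2888:3888
server.2=10.90.7.10:2888:3888
server.3=10.90.2.102:2888:3888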

 

The log of a ZooKeeper instance that has started normally looks like this:

2017-07-22 13:07:52,947 - INFO  [QuorumPeer:/0:0:0:0:0:0:0:0:2181:[email protected]294] - Getting a snapshot from leader
2017-07-22 13:07:52,953 - INFO  [QuorumPeer:/0:0:0:0:0:0:0:0:2181:[email protected]326] - Setting leader epoch 1
2017-07-22 13:07:52,953 - INFO  [QuorumPeer:/0:0:0:0:0:0:0:0:2181:[email protected]256] - Snapshotting: 0
2017-07-22 13:08:21,323 - INFO  [WorkerReceiver Thread:[email protected]496] - Notification: 3 (n.leader), 0 (n.zxid), 1 (n.round), LOOKING (n.state), 3 (n.sid), FOLLOWING (my state)
2017-07-22 13:08:43,360 - INFO  [WorkerReceiver Thread:[email protected]496] - Notification: 3 (n.leader), 0 (n.zxid), 2 (n.round), LOOKING (n.state), 3 (n.sid), FOLLOWING (my state)
2017-07-22 13:10:15,173 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]251] - Accepted socket connection from /10.90.7.10:33006
2017-07-22 13:10:15,181 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]777] - Client attempting to establish new session at /10.90.7.10:33006
2017-07-22 13:10:15,188 - WARN  [QuorumPeer:/0:0:0:0:0:0:0:0:2181:[email protected]116] - Got zxid 0x100000001 expected 0x1
2017-07-22 13:10:15,189 - INFO  [SyncThread:1:[email protected]199] - Creating new log file: log.100000001
2017-07-22 13:10:15,199 - INFO  [CommitProcessor:1:[email protected]1580] - Established session 0x15d68b1dbf90000 with negotiated timeout 30000 for client /10.90.7.10:33006
2017-07-22 13:24:06,656 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]1435] - Closed socket connection for client /10.90.7.10:33006 which had sessionid 0x15d68b1dbf90000
2017-07-22 13:34:35,717 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]251] - Accepted socket connection from /10.90.7.10:33007
2017-07-22 13:34:35,722 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]777] - Client attempting to establish new session at /10.90.7.10:33007
2017-07-22 13:34:35,725 - INFO  [CommitProcessor:1:[email protected]1580] - Established session 0x15d68b1dbf90001 with negotiated timeout 4000 for client /10.90.7.10:33007
2017-07-22 13:48:54,070 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]1435] - Closed socket connection for client /10.90.7.10:33007 which had sessionid 0x15d68b1dbf90001
2017-07-22 14:13:58,300 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]251] - Accepted socket connection from /10.90.7.10:33012
2017-07-22 14:13:58,305 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]777] - Client attempting to establish new session at /10.90.7.10:33012
2017-07-22 14:13:58,307 - INFO  [CommitProcessor:1:[email protected]1580] - Established session 0x15d68b1dbf90002 with negotiated timeout 4000 for client /10.90.7.10:33012
2017-07-22 14:20:39,807 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]251] - Accepted socket connection from /10.90.2.102:51235
2017-07-22 14:20:39,811 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]777] - Client attempting to establish new session at /10.90.2.102:51235
2017-07-22 14:20:39,813 - INFO  [CommitProcessor:1:[email protected]1580] - Established session 0x15d68b1dbf90003 with negotiated timeout 4000 for client /10.90.2.102:51235
2017-07-22 14:23:00,052 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]1435] - Closed socket connection for client /10.90.2.102:51235 which had sessionid 0x15d68b1dbf90003
2017-07-22 14:23:00,385 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]251] - Accepted socket connection from /10.90.2.102:51236
2017-07-22 14:23:00,387 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]777] - Client attempting to establish new session at /10.90.2.102:51236
2017-07-22 14:23:00,389 - INFO  [CommitProcessor:1:[email protected]1580] - Established session 0x15d68b1dbf90004 with negotiated timeout 4000 for client /10.90.2.102:51236
2017-07-22 14:23:44,703 - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]634] - EndOfStreamException: Unable to read additional data from client sessionid 0x15d68b1dbf90002, likely client has closed socket
2017-07-22 14:23:44,704 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]1435] - Closed socket connection for client /10.90.7.10:33012 which had sessionid 0x15d68b1dbf90002
2017-07-22 14:31:19,756 - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]634] - EndOfStreamException: Unable to read additional data from client sessionid 0x15d68b1dbf90004, likely client has closed socket
2017-07-22 14:31:19,758 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]1435] - Closed socket connection for client /10.90.2.102:51236 which had sessionid 0x15d68b1dbf90004
2017-07-22 15:10:51,738 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]251] - Accepted socket connection from /10.90.7.2:17992
2017-07-22 15:10:51,743 - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:[email protected]777] - Client attempting to establish new session at /10.90.7.2:17992
2017-07-22 15:10:51,746 - INFO  [CommitProcessor:1:[email protected]1580] - Established session 0x15d68b1dbf90005 with negotiated timeout 4000 for client /10.90.7.2:17992

 

 

Run ./zkCli.sh:

[root@localhost bin]# ./zkCli.sh 
Connecting to localhost:2181
2017-07-22 16:46:49,278 - INFO  [main:[email protected]97] - Client environment:zookeeper.version=3.3.6-1366786, built on 07/29/2012 06:22 GMT
2017-07-22 16:46:49,280 - INFO  [main:[email protected]97] - Client environment:host.name=localhost
2017-07-22 16:46:49,280 - INFO  [main:[email protected]97] - Client environment:java.version=1.8.0_121
2017-07-22 16:46:49,280 - INFO  [main:[email protected]97] - Client environment:java.vendor=Oracle Corporation
2017-07-22 16:46:49,280 - INFO  [main:[email protected]97] - Client environment:java.home=/usr/java/jdk1.8.0_121/jre
2017-07-22 16:46:49,280 - INFO  [main:[email protected]97] - Client environment:java.class.path=/opt/amq/zk1/bin/../build/classes:/opt/amq/zk1/bin/../build/lib/*.jar:/opt/amq/zk1/bin/../zookeeper-3.3.6.jar:/opt/amq/zk1/bin/../lib/log4j-1.2.15.jar:/opt/amq/zk1/bin/../lib/jline-0.9.94.jar:/opt/amq/zk1/bin/../src/java/lib/*.jar:/opt/amq/zk1/bin/../conf:.:/usr/java/jdk1.8.0_121/lib/dt.jar:/usr/java/jdk1.8.0_121/lib/tools.jar
2017-07-22 16:46:49,281 - INFO  [main:[email protected]] - Client environment:java.library.path=/home/torch/install/lib:/usr/local/cudnn:/usr/local/cuda-8.0/lib64::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-07-22 16:46:49,281 - INFO  [main:[email protected]] - Client environment:java.io.tmpdir=/tmp
2017-07-22 16:46:49,281 - INFO  [main:[email protected]] - Client environment:java.compiler=<NA>
2017-07-22 16:46:49,281 - INFO  [main:[email protected]] - Client environment:os.name=Linux
2017-07-22 16:46:49,281 - INFO  [main:[email protected]] - Client environment:os.arch=amd64
2017-07-22 16:46:49,281 - INFO  [main:[email protected]] - Client environment:os.version=3.10.0-229.el7.x86_64
2017-07-22 16:46:49,282 - INFO  [main:[email protected]] - Client environment:user.name=root
2017-07-22 16:46:49,282 - INFO  [main:[email protected]] - Client environment:user.home=/root
2017-07-22 16:46:49,282 - INFO  [main:[email protected]] - Client environment:user.dir=/opt/amq/zk1/bin
2017-07-22 16:46:49,283 - INFO  [main:[email protected]] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 [email protected]
Welcome to ZooKeeper!
2017-07-22 16:46:49,295 - INFO  [main-SendThread():[email protected]] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181
JLine support is enabled
2017-07-22 16:46:49,359 - INFO  [main-SendThread(localhost:2181):[email protected]] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2017-07-22 16:46:49,370 - INFO  [main-SendThread(localhost:2181):[email protected]] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15d68b1dbf90006, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] info
ZooKeeper -server host:port cmd args
        stat path [watch]
        set path data [version]
        ls path [watch]
        delquota [-n|-b] path
        ls2 path [watch]
        setAcl path acl
        setquota -n|-b val path
        history 
        redo cmdno
        printwatches on|off
        delete path [version]
        sync path
        listquota path
        get path [watch]
        create [-s] [-e] path data acl
        addauth scheme auth
        quit 
        getAcl path
        close 
        connect host:port
[zk: localhost:2181(CONNECTED) 2] ls /    
[activemq, zookeeper]
[zk: localhost:2181(CONNECTED) 3] ls /activemq
[leveldb-stores]
[zk: localhost:2181(CONNECTED) 4] ls /zookeeper
[quota]
[zk: localhost:2181(CONNECTED) 5] ls /zookeeper/quota
[]
[zk: localhost:2181(CONNECTED) 6] 

 

2. Download and install ActiveMQ

Download: http://archive.apache.org/dist/activemq/5.14.3/apache-activemq-5.14.3-bin.tar.gz

This part is simpler. I installed ActiveMQ on all three machines:
10.90.7.2
10.90.7.10
10.90.2.102
ActiveMQ is similar to a Tomcat application (it is actually a Jetty web application).
The main configuration file to modify is activemq.xml.

Unpack apache-activemq-5.14.3-bin.tar.gz and rename the directory to mq1 (on 10.90.7.10), mq2 (on 10.90.7.2), and mq3 (on 10.90.2.102).
The configuration steps below use mq1 as the example:

[root@localhost amq]# pwd
/opt/amq
[root@localhost amq]# ll
total 64592
drwxr-xr-x 10 root root     4096 Dec 19  2016 apache-activemq-5.14.3
-rw-r--r--  1 root root 54277759 Feb 16 09:47 apache-activemq-5.14.3-bin.tar.gz
drwxr-xr-x 11 root root     4096 Jul 22 13:34 mq1
drwxr-xr-x 11 root root     4096 Jul 22 11:44 zk1
drwxr-xr-x 11 root root     4096 Jul 22 11:45 zk2
drwxr-xr-x 11 root root     4096 Jul 22 11:49 zk3
drwxr-xr-x 10 www  www      4096 Jul 29  2012 zookeeper-3.3.6
-rw-r--r--  1 root root 11833706 Jul 22 09:27 zookeeper-3.3.6.tar.gz

 

Enter the mq1 directory and edit activemq.xml (under conf/). Since the ActiveMQ cluster here is based on ZooKeeper, don't use the default persistence scheme. That is, take the original

<persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>

comment it out, and replace it with the following:

<persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="10.90.7.10:2181,10.90.7.10:2182,10.90.7.10:2183"
        hostname="10.90.7.10"
        sync="local_disk"
        zkPath="/activemq/leveldb-stores"
        />
</persistenceAdapter>

Here, hostname must be changed to the IP address of the machine the ActiveMQ instance runs on, or a resolvable domain name. zkAddress is the address of the ZooKeeper ensemble, i.e. the IP:port pairs of the individual ZooKeeper instances, separated by commas. zkPath is specified here as /activemq/leveldb-stores, which is why that path appears in the ls output of zkCli.sh earlier.

Also, the brokerName attribute on the broker element must be identical across all three ActiveMQ instances. Here it is set to tkcss.

......
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="tkcss" dataDirectory="${activemq.data}">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see:

                         http://activemq.apache.org/slow-consumer-handling.html

                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>


        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <!--
        <persistenceAdapter>
            <kahaDB directory="${activemq.data}/kahadb"/>
        </persistenceAdapter>
        -->

        <persistenceAdapter>
            <replicatedLevelDB 
                directory="${activemq.data}/leveldb"
                replicas="3"
                bind="tcp://0.0.0.0:0"
                zkAddress="10.90.7.10:2181,10.90.7.10:2182,10.90.7.10:2183"
                hostname="10.90.7.10"
                sync="local_disk"
                zkPath="/activemq/leveldb-stores"
                />
        </persistenceAdapter>
......

 

Once configured, start ActiveMQ:

[root@localhost bin]# ./activemq start

 

After startup, a healthy log looks like this; here is the startup log of mq3:

2017-07-22 17:13:39,871 | INFO  | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@3e993445: startup date [Sat Jul 22 17:13:39 CST 2017]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2017-07-22 17:13:40,835 | INFO  | Using Persistence Adapter: Replicated LevelDB[/opt/amq/mq3/data/leveldb, 10.90.7.10:2181,10.90.7.10:2182,10.90.7.10:2183//activemq/leveldb-stores] | org.apache.activemq.broker.BrokerService | main
2017-07-22 17:13:40,892 | INFO  | Starting StateChangeDispatcher | org.apache.activemq.leveldb.replicated.groups.ZKClient | ZooKeeper state change dispatcher thread
2017-07-22 17:13:40,897 | INFO  | Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,899 | INFO  | Client environment:host.name=localhost | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,901 | INFO  | Client environment:java.version=1.7.0_75 | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,903 | INFO  | Client environment:java.vendor=Oracle Corporation | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,904 | INFO  | Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64/jre | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,906 | INFO  | Client environment:java.class.path=/opt/amq/mq3//bin/activemq.jar | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,908 | INFO  | Client environment:java.library.path=/usr/local/cuda-7.5/lib64::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,909 | INFO  | Client environment:java.io.tmpdir=/opt/amq/mq3//tmp | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,911 | INFO  | Client environment:java.compiler=<NA> | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,913 | INFO  | Client environment:os.name=Linux | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,914 | INFO  | Client environment:os.arch=amd64 | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,916 | INFO  | Client environment:os.version=3.10.0-229.el7.x86_64 | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,918 | INFO  | Client environment:user.name=root | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,919 | INFO  | Client environment:user.home=/root | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,922 | INFO  | Client environment:user.dir=/opt/amq/mq3/bin | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,925 | INFO  | Initiating client connection, connectString=10.90.7.10:2181,10.90.7.10:2182,10.90.7.10:2183 sessionTimeout=2000 [email protected]bb2dc75 | org.apache.zookeeper.ZooKeeper | main
2017-07-22 17:13:40,941 | WARN  | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named Client was found in specified JAAS configuration file: /opt/amq/mq3//conf/login.config. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:2183)
2017-07-22 17:13:40,944 | INFO  | Opening socket connection to server 10.90.7.10/10.90.7.10:2183 | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:2183)
2017-07-22 17:13:40,944 | WARN  | unprocessed event state: AuthFailed | org.apache.activemq.leveldb.replicated.groups.ZKClient | main-EventThread
2017-07-22 17:13:40,949 | INFO  | Socket connection established to 10.90.7.10/10.90.7.10:2183, initiating session | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:2183)
2017-07-22 17:13:40,956 | WARN  | Connected to an old server; r-o mode will be unavailable | org.apache.zookeeper.ClientCnxnSocket | main-SendThread(10.90.7.10:2183)
2017-07-22 17:13:40,957 | INFO  | Session establishment complete on server 10.90.7.10/10.90.7.10:2183, sessionid = 0x35d68b2a0f10004, negotiated timeout = 4000 | org.apache.zookeeper.ClientCnxn | main-SendThread(10.90.7.10:2183)
2017-07-22 17:13:41,146 | INFO  | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | ActiveMQ BrokerService[tkcss] Task-1
2017-07-22 17:13:41,157 | INFO  | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | ActiveMQ BrokerService[tkcss] Task-1
2017-07-22 17:13:41,164 | INFO  | Slave started | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[tkcss] Task-1
2017-07-22 17:13:41,214 | INFO  | Slave skipping download of: log/0000000000000000.log | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:13:41,217 | INFO  | Slave requested: 0000000000000397.index/CURRENT | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:13:41,219 | INFO  | Slave requested: 0000000000000397.index/000003.log | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:13:41,220 | INFO  | Slave requested: 0000000000000397.index/MANIFEST-000002 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:13:41,225 | INFO  | Attaching... Downloaded 0.02/1.66 kb and 1/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:13:41,226 | INFO  | Attaching... Downloaded 1.61/1.66 kb and 2/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:13:41,228 | INFO  | Attaching... Downloaded 1.66/1.66 kb and 3/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:13:41,228 | INFO  | Attached | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3

 

Finally, try killing the current master ActiveMQ instance; in mq3's log you can then see a new master being elected:

2017-07-22 17:15:14,796 | WARN  | Unexpected session error: java.io.IOException: Connection reset by peer | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-2
2017-07-22 17:15:15,816 | INFO  | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-1
2017-07-22 17:15:15,817 | INFO  | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:15,819 | WARN  | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:16,821 | INFO  | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-1
2017-07-22 17:15:16,822 | INFO  | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:16,823 | WARN  | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:17,824 | INFO  | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-1
2017-07-22 17:15:17,825 | INFO  | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:17,826 | WARN  | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:18,828 | INFO  | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-1
2017-07-22 17:15:18,829 | INFO  | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:18,830 | WARN  | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:19,832 | INFO  | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | hawtdispatch-DEFAULT-1
2017-07-22 17:15:19,833 | INFO  | Attaching to master: tcp://10.90.7.10:50942 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:19,834 | WARN  | Unexpected session error: java.net.ConnectException: Connection refused | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-1
2017-07-22 17:15:20,012 | INFO  | Slave stopped | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[tkcss] Task-2
2017-07-22 17:15:20,140 | INFO  | Using the pure java LevelDB implementation. | org.apache.activemq.leveldb.LevelDBClient | ActiveMQ BrokerService[tkcss] Task-2
2017-07-22 17:15:20,141 | INFO  | Attaching to master: tcp://10.90.7.2:2896 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | ActiveMQ BrokerService[tkcss] Task-2
2017-07-22 17:15:20,142 | INFO  | Slave started | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[tkcss] Task-2
2017-07-22 17:15:20,169 | INFO  | Slave requested: 0000000000001401.index/000006.log | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,170 | INFO  | Slave requested: 0000000000001401.index/000005.sst | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,171 | INFO  | Slave requested: 0000000000001401.index/CURRENT | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,171 | INFO  | Slave requested: 0000000000001401.index/MANIFEST-000004 | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,177 | INFO  | Attaching... Downloaded 5.17/10.18 kb and 1/5 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,178 | INFO  | Attaching... Downloaded 9.01/10.18 kb and 2/5 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,179 | INFO  | Attaching... Downloaded 10.06/10.18 kb and 3/5 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,180 | INFO  | Attaching... Downloaded 10.08/10.18 kb and 4/5 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,181 | INFO  | Attaching... Downloaded 10.18/10.18 kb and 5/5 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3
2017-07-22 17:15:20,181 | INFO  | Attached | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | hawtdispatch-DEFAULT-3

 

At this point, the ZooKeeper-backed cluster of the three ActiveMQ instances is fully configured.

In an application, the broker URL can then be configured like this:

brokerURL=failover:(tcp://10.90.7.2:61616,tcp://10.90.7.10:61616,tcp://10.90.2.102:61616)?initialReconnectDelay=1000
userName=admin
password=admin

This setup uses the system's default user and permission management; that configuration lives in jetty.xml.

 

Integrating ActiveMQ with Spring is not covered here; it will be the subject of a future post.

 
