Containerized Microservices
Posted by 水共禾刀
This post is my lab notes for the containerized-services chapter of <Java Rest Service实战>. The base environment is Ubuntu 16.04 LTS, and all of the clusters in these experiments run on a single virtual machine, so they are really pseudo-clusters, but that is enough to learn the basic setup procedure.
I. Building a ZooKeeper Container Cluster
1. Define the Dockerfile
FROM index.tenxcloud.com/docker_library/java
MAINTAINER HaHa
# zookeeper-3.4.8.tar.gz must be downloaded in advance
COPY zookeeper-3.4.8.tar.gz /tmp/
RUN tar -xzf /tmp/zookeeper-3.4.8.tar.gz -C /opt
RUN cp /opt/zookeeper-3.4.8/conf/zoo_sample.cfg /opt/zookeeper-3.4.8/conf/zoo.cfg
RUN mv /opt/zookeeper-3.4.8 /opt/zookeeper
RUN rm -f /tmp/zookeeper-3.4.8.tar.gz
ADD entrypoint.sh /opt/entrypoint.sh
RUN chmod 777 /opt/entrypoint.sh
EXPOSE 2181 2888 3888
WORKDIR /opt/zookeeper
VOLUME ["/opt/zookeeper/conf","/tmp/zookeeper"]
CMD ["/opt/entrypoint.sh"]
2. Write entrypoint.sh
#!/bin/sh
ZOO_CFG="/opt/zookeeper/conf/zoo.cfg"
echo "server id (myid): ${SERVER_ID}"
echo "${SERVER_ID}" > /tmp/zookeeper/myid
echo "${APPEND_1}" >> ${ZOO_CFG}
echo "${APPEND_2}" >> ${ZOO_CFG}
echo "${APPEND_3}" >> ${ZOO_CFG}
echo "${APPEND_4}" >> ${ZOO_CFG}
echo "${APPEND_5}" >> ${ZOO_CFG}
echo "${APPEND_6}" >> ${ZOO_CFG}
echo "${APPEND_7}" >> ${ZOO_CFG}
echo "${APPEND_8}" >> ${ZOO_CFG}
echo "${APPEND_9}" >> ${ZOO_CFG}
echo "${APPEND_10}" >> ${ZOO_CFG}
/opt/zookeeper/bin/zkServer.sh start-foreground
3. Build the image
// Working directory
root@ubuntu:/home/zhl/zookeeper# ll
total 21756
drwxrwxr-x 2 zhl zhl 4096 Sep 14 22:17 ./
drwxr-xr-x 25 zhl zhl 4096 Sep 14 05:09 ../
-rw-rw-r-- 1 zhl zhl 512 Sep 14 20:25 Dockerfile
-rw-r--r-- 1 root root 508 Sep 14 22:17 entrypoint.sh
-rw-rw-r-- 1 zhl zhl 22261552 Sep 14 05:29 zookeeper-3.4.8.tar.gz
// Build a ZooKeeper image named zk from the current directory.
root@ubuntu:/home/zhl/zookeeper# docker build -t zk .
Sending build context to Docker daemon 22.27MB
Step 1/13 : FROM index.tenxcloud.com/docker_library/java
---> 264282a59a95
Step 2/13 : MAINTAINER HaHa
---> Running in 79720c1fbb96
---> ae7eddae4e93
Removing intermediate container 79720c1fbb96
Step 3/13 : COPY zookeeper-3.4.8.tar.gz /tmp/
---> 245818bb5e48
Removing intermediate container 4f8a2919a235
Step 4/13 : RUN tar -xzf /tmp/zookeeper-3.4.8.tar.gz -C /opt
---> Running in b8302238ceb1
---> 00aea27e64e3
Removing intermediate container b8302238ceb1
Step 5/13 : RUN cp /opt/zookeeper-3.4.8/conf/zoo_sample.cfg /opt/zookeeper-3.4.8/conf/zoo.cfg
---> Running in 6278f1a9487c
---> 007e855798a5
Removing intermediate container 6278f1a9487c
Step 6/13 : RUN mv /opt/zookeeper-3.4.8 /opt/zookeeper
---> Running in 90bb30879ea4
---> 63d17dc7b863
Removing intermediate container 90bb30879ea4
Step 7/13 : RUN rm -f /tmp/zookeeper-3.4.8.tar.gz
---> Running in d7ea5b8a83f4
---> b59d11ed6bdd
Removing intermediate container d7ea5b8a83f4
Step 8/13 : ADD entrypoint.sh /opt/entrypoint.sh
---> 9576b2b0cebf
Removing intermediate container ef1bd65c7c80
Step 9/13 : RUN chmod 777 /opt/entrypoint.sh
---> Running in f9dd51fe02f6
---> 4ffadd4c1d60
Removing intermediate container f9dd51fe02f6
Step 10/13 : EXPOSE 2181 2888 3888
---> Running in e58393d692c1
---> e2c47dc22195
Removing intermediate container e58393d692c1
Step 11/13 : WORKDIR /opt/zookeeper
---> 1fac68fcb274
Removing intermediate container 851cde7114c4
Step 12/13 : VOLUME /opt/zookeeper/conf /tmp/zookeeper
---> Running in e395e1f1ef13
---> 77f2f7be2dd0
Removing intermediate container e395e1f1ef13
Step 13/13 : CMD /opt/entrypoint.sh
---> Running in 8bf6fa5a1079
---> 5c819179f3f8
Removing intermediate container 8bf6fa5a1079
Successfully built 5c819179f3f8
Successfully tagged zk:latest
4. Start three ZooKeeper instances on a single host
// Start the zookeeper containers zk1/zk2/zk3 in detached (daemon) mode
root@ubuntu:/home/zhl/zookeeper# docker run -d --name=zk1 --net=host -e SERVER_ID=1 -e APPEND_1=server.1=192.168.225.128:2888:3888 -e APPEND_2=server.2=192.168.225.128:2889:3889 -e APPEND_3=server.3=192.168.225.128:2890:3890 -e APPEND_4=clientPort=2181 zk
c3990b9e7bdedd1fdf4e73848eb4370279d1da018a82cc767f9529d2f9f5f72b
root@ubuntu:/home/zhl/zookeeper# docker run -d --name=zk2 --net=host -e SERVER_ID=2 -e APPEND_1=server.1=192.168.225.128:2888:3888 -e APPEND_2=server.2=192.168.225.128:2889:3889 -e APPEND_3=server.3=192.168.225.128:2890:3890 -e APPEND_4=clientPort=2182 zk
df4c81b16c7de76d74145a06bc978959f0e11c8d4aa7d615f3a98053f8a5cd2d
root@ubuntu:/home/zhl/zookeeper# docker run -d --name=zk3 --net=host -e SERVER_ID=3 -e APPEND_1=server.1=192.168.225.128:2888:3888 -e APPEND_2=server.2=192.168.225.128:2889:3889 -e APPEND_3=server.3=192.168.225.128:2890:3890 -e APPEND_4=clientPort=2183 zk
bd7718052a8d8e1e9c37b326c704f92c120383ab4cbf6d5988cffc7cb13bc720
Here 192.168.225.128 is the host's IP address (echo $HOST_IP). Because all three containers run with --net=host, they share the host's network stack, which is why each instance needs its own clientPort (2181/2182/2183) and its own peer/election ports.
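If HOST_IP is not already exported on the host, one way to set it (a convenience sketch assuming a single NIC, not from the original text) is:

export HOST_IP=$(hostname -I | awk '{print $1}')
echo $HOST_IP   # should print 192.168.225.128 in this setup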
5. Check that ZooKeeper is running
Contents of zoo.cfg for one of the instances (the server.N lines and the trailing clientPort were appended by entrypoint.sh; the appended clientPort takes precedence over the default 2181 copied from zoo_sample.cfg):
root@ubuntu:/# cat ./var/lib/docker/volumes/b38902183ee2684494ea1edba0c963635851660e5edc0424471a49935309d669/_data/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.225.128:2888:3888
server.2=192.168.225.128:2889:3889
server.3=192.168.225.128:2890:3890
clientPort=2182
// List all containers
root@ubuntu:/home/zhl/zookeeper# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c1ab885b2770 zk "/opt/entrypoint.sh" 26 seconds ago Up 26 seconds zk3
b939bfa60ea2 zk "/opt/entrypoint.sh" 58 seconds ago Up 58 seconds zk2
bd161a246c28 zk "/opt/entrypoint.sh" 2 minutes ago Up 2 minutes zk1
root@ubuntu:~# echo stat|nc 127.0.0.1 2181
Zookeeper version: 3.4.8--1, built on 02/06/2016 03:18 GMT
Clients:
/172.17.0.3:45638[1](queued=0,recved=1177,sent=1177)
/127.0.0.1:57130[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/70
Received: 9114
Sent: 9121
Connections: 2
Outstanding: 0
Zxid: 0x50000005e
Mode: follower
Node count: 21
root@ubuntu:~# telnet 192.168.225.128 2183
Trying 192.168.225.128...
Connected to 192.168.225.128.
Escape character is '^]'.
stat
Zookeeper version: 3.4.8--1, built on 02/06/2016 03:18 GMT
Clients:
/192.168.225.128:40962[0](queued=0,recved=1,sent=0)
/172.17.0.2:43694[1](queued=0,recved=1801,sent=1801)
Latency min/avg/max: 0/0/187
Received: 18234
Sent: 18241
Connections: 2
Outstanding: 0
Zxid: 0x50000005e
Mode: follower
Node count: 21
Connection closed by foreign host.
root@ubuntu:~# jps
6888 ZooKeeperMain
27882 Jps
6955 ZooKeeperMain
Alternatively, check whether the ZooKeeper processes are up with: ps -ef | grep zoo.cfg
root@ubuntu:~# ps -ef | grep zoo.cfg
root 6462 6439 0 14:59 ? 00:00:36 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.8.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/zookeeper/bin/../conf/zoo.cfg
root 6561 6540 0 14:59 ? 00:00:44 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.8.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/zookeeper/bin/../conf/zoo.cfg
root 6670 6649 0 14:59 ? 00:00:38 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.8.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/zookeeper/bin/../conf/zoo.cfg
root 28049 3295 0 23:54 pts/2 00:00:00 grep --color=auto zoo.cfg
If anything goes wrong, inspect the container logs, e.g. docker logs c1ab885b2770.
// jps (Java Virtual Machine Process Status Tool) is a command introduced in JDK 1.5 that lists the PIDs of all running Java processes.
root@ubuntu:~# jps -m      // -m prints the arguments passed to each main method
27665 Jps -m
6888 ZooKeeperMain -server 192.168.225.128:2181
6955 ZooKeeperMain -server 192.168.225.128:2182
root@ubuntu:~# echo stat|nc 127.0.0.1 2182
Zookeeper version: 3.4.8--1, built on 02/06/2016 03:18 GMT
Clients:
/127.0.0.1:36988[0](queued=0,recved=1,sent=0)
/172.17.0.4:60280[1](queued=0,recved=955,sent=955)
Latency min/avg/max: 0/0/288
Received: 18211
Sent: 18212
Connections: 2
Outstanding: 0
Zxid: 0x50000005e
Mode: leader
Node count: 21
6. Other useful commands and tips
1. echo stat | nc 127.0.0.1 2181 - shows whether this node was elected follower or leader, plus client and latency statistics.
2. echo ruok | nc 127.0.0.1 2181 - tests whether the server is running; it replies imok if it is.
3. echo dump | nc 127.0.0.1 2181 - lists outstanding sessions and ephemeral nodes.
4. echo kill | nc 127.0.0.1 2181 - shuts the server down.
5. echo conf | nc 127.0.0.1 2181 - prints details of the server's configuration.
6. echo cons | nc 127.0.0.1 2181 - lists full connection/session details for all clients connected to this server.
7. echo envi | nc 127.0.0.1 2181 - prints details about the server's environment (as opposed to conf).
8. echo reqs | nc 127.0.0.1 2181 - lists outstanding requests.
9. echo wchs | nc 127.0.0.1 2181 - prints summary information about the watches on the server.
10. echo wchc | nc 127.0.0.1 2181 - prints watch details grouped by session; the output is a list of sessions with their associated watches.
11. echo wchp | nc 127.0.0.1 2181 - prints watch details grouped by path; the output lists paths together with the sessions watching them.
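Since each instance in this setup listens on its own client port, a small loop (a convenience sketch, not part of the original text) shows which node holds which role in one go:

for port in 2181 2182 2183; do
  echo "--- zookeeper on port $port ---"
  echo stat | nc 127.0.0.1 $port | grep Mode
done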
// The zk cluster has 3 nodes, so why does the query only show 2 of them?
root@ubuntu:~# netstat -an | grep 2183
tcp6       0      0 :::2183                 :::*                    LISTEN
tcp6       0      0 192.168.225.128:2183    172.17.0.2:43694        ESTABLISHED
unix  3      [ ]         STREAM     CONNECTED     21838    /run/systemd/journal/stdout
unix  3      [ ]         STREAM     CONNECTED     21837
root@ubuntu:~# netstat -atunp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.1.1:53            0.0.0.0:*               LISTEN      1023/dnsmasq
tcp        0      0 172.17.0.1:44870        172.17.0.3:9092         CLOSE_WAIT  4886/docker-proxy
tcp        0      0 172.17.0.1:50278        172.17.0.2:9091         CLOSE_WAIT  4692/docker-proxy
tcp        0      0 172.17.0.1:41458        172.17.0.4:9093         CLOSE_WAIT  5041/docker-proxy
tcp6       0      0 :::9092                 :::*                    LISTEN      4886/docker-proxy
tcp6       0      0 :::2181                 :::*                    LISTEN      6462/java
tcp6       0      0 :::9093                 :::*                    LISTEN      5041/docker-proxy
tcp6       0      0 :::2182                 :::*                    LISTEN      6561/java
tcp6       0      0 :::2183                 :::*                    LISTEN      6670/java
tcp6       0      0 192.168.225.128:2889    :::*                    LISTEN      6561/java
tcp6       0      0 192.168.225.128:3888    :::*                    LISTEN      6462/java
tcp6       0      0 192.168.225.128:3889    :::*                    LISTEN      6561/java
tcp6       0      0 192.168.225.128:3890    :::*                    LISTEN      6670/java
tcp6       0      0 :::34035                :::*                    LISTEN      6462/java
tcp6       0      0 :::37529                :::*                    LISTEN      6670/java
tcp6       0      0 :::37817                :::*                    LISTEN      6561/java
tcp6       0      0 :::9091                 :::*                    LISTEN      4692/docker-proxy
tcp6       0      0 192.168.225.128:58368   192.168.225.128:2889    ESTABLISHED 6670/java
tcp6       0      0 192.168.225.128:9091    172.17.0.4:39558        FIN_WAIT2   4692/docker-proxy
tcp6       0      0 192.168.225.128:42050   192.168.225.128:3888    ESTABLISHED 6670/java
tcp6       1      0 192.168.225.128:49148   192.168.225.128:2181    CLOSE_WAIT  6888/java
tcp6       0      0 192.168.225.128:3889    192.168.225.128:50582   ESTABLISHED 6561/java
tcp6       0      0 192.168.225.128:42002   192.168.225.128:3888    ESTABLISHED 6561/java
tcp6       0      0 192.168.225.128:3888    192.168.225.128:42050   ESTABLISHED 6462/java
tcp6       0      0 192.168.225.128:9092    172.17.0.4:37246        FIN_WAIT2   4886/docker-proxy
tcp6       0      0 192.168.225.128:58374   192.168.225.128:2889    ESTABLISHED 6462/java
tcp6       0      0 192.168.225.128:2182    172.17.0.4:60280        ESTABLISHED 6561/java
tcp6       0      0 192.168.225.128:3888    192.168.225.128:42002   ESTABLISHED 6462/java
tcp6       1      0 192.168.225.128:59822   192.168.225.128:2182    CLOSE_WAIT  6955/java
tcp6       0      0 192.168.225.128:2889    192.168.225.128:58368   ESTABLISHED 6561/java
tcp6       0      0 192.168.225.128:2889    192.168.225.128:58374   ESTABLISHED 6561/java
tcp6       0      0 192.168.225.128:2181    172.17.0.3:45638        ESTABLISHED 6462/java
tcp6       0      0 192.168.225.128:9093    172.17.0.4:60222        FIN_WAIT2   5041/docker-proxy
tcp6       0      0 192.168.225.128:2183    172.17.0.2:43694        ESTABLISHED 6670/java
tcp6       0      0 192.168.225.128:50582   192.168.225.128:3889    ESTABLISHED 6670/java
udp        0      0 0.0.0.0:631             0.0.0.0:*                           2688/cups-browsed
udp        0      0 0.0.0.0:60401           0.0.0.0:*                           1023/dnsmasq
udp        0      0 127.0.1.1:53            0.0.0.0:*                           1023/dnsmasq
udp        0      0 0.0.0.0:68              0.0.0.0:*                           26791/dhclient
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           818/avahi-daemon: r
udp        0      0 0.0.0.0:36101           0.0.0.0:*                           818/avahi-daemon: r
udp6       0      0 :::54133                :::*                                818/avahi-daemon: r
udp6       0      0 :::5353                 :::*                                818/avahi-daemon: r
root@ubuntu:~# lsof -i:2181
COMMAND  PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
java    6462 root   34u  IPv6  495094      0t0  TCP *:2181 (LISTEN)
java    6462 root   40u  IPv6 1615785      0t0  TCP 192.168.225.128:2181->172.17.0.3:45638 (ESTABLISHED)
java    6888 root   13u  IPv6  498296      0t0  TCP 192.168.225.128:49148->192.168.225.128:2181 (CLOSE_WAIT)
root@ubuntu:~# lsof -i:2182
COMMAND  PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
java    6561 root   34u  IPv6  495425      0t0  TCP *:2182 (LISTEN)
java    6561 root   41u  IPv6 1615791      0t0  TCP 192.168.225.128:2182->172.17.0.4:60280 (ESTABLISHED)
java    6955 root   13u  IPv6  500012      0t0  TCP 192.168.225.128:59822->192.168.225.128:2182 (CLOSE_WAIT)
root@ubuntu:~# lsof -i:2183
COMMAND  PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
java    6670 root   34u  IPv6  495813      0t0  TCP *:2183 (LISTEN)
java    6670 root   39u  IPv6 1615789      0t0  TCP 192.168.225.128:2183->172.17.0.2:43694 (ESTABLISHED)
The three most common states: ESTABLISHED means the two ends are actively communicating, TIME_WAIT means this side closed the connection (active close), and CLOSE_WAIT means the remote side closed it (passive close).
II. Kafka
1. Kafka overview
Kafka is a distributed messaging system based on the publish/subscribe model. Its main design goals are: message persistence with O(1) time complexity, so that access stays constant-time even for terabytes of data; high throughput, on the order of 100K+ messages per second on a single cheap commodity machine; message partitioning across Kafka servers and distributed consumption, while guaranteeing message order within each Partition; support for both offline and real-time data processing; and scale-out, i.e. online horizontal scaling.
Comparison with other common message queues:

RabbitMQ is an open-source message queue written in Erlang. It natively supports many protocols (AMQP, XMPP, SMTP, STOMP), which also makes it quite heavyweight and better suited to enterprise development. It implements a broker architecture, meaning messages are queued on a central node before being delivered to clients, and it has good support for routing, load balancing, and data persistence.

Redis is a key-value NoSQL database under very active development. Although it is a key-value store, it supports MQ functionality and can perfectly well be used as a lightweight queue service. In a benchmark of RabbitMQ and Redis each performing one million enqueue and dequeue operations (execution time recorded every 100K operations, with payloads of 128 bytes, 512 bytes, 1K, and 10K), Redis enqueued faster than RabbitMQ for small payloads but became unbearably slow once the payload exceeded 10K; for dequeues, Redis performed very well at every payload size, while RabbitMQ's dequeue performance was far below Redis.

ZeroMQ bills itself as the fastest message queue system, especially for high-throughput scenarios. It can implement the advanced/complex queues that RabbitMQ is not good at, but developers have to assemble several technical building blocks themselves, and that complexity is the main challenge to applying it successfully. ZeroMQ has a distinctive broker-less model: there is no message server or middleware to install and run, because the application itself plays that role. You simply link the ZeroMQ library (installable via NuGet) and then send messages between applications. However, ZeroMQ provides only non-persistent queues, so data is lost if a node goes down. Twitter's Storm used ZeroMQ as its default data transport before version 0.9.0 (from 0.9 onward Storm supports both ZeroMQ and Netty as transport modules).

ActiveMQ is an Apache subproject. Like ZeroMQ it can implement queues with both broker and peer-to-peer techniques, and like RabbitMQ it can implement advanced scenarios efficiently with little code.

Kafka/Jafka: Kafka is an Apache subproject, a high-performance, cross-language, distributed publish/subscribe messaging system, and Jafka was incubated on top of Kafka as an enhanced version of it. Its characteristics: fast persistence, with messages persisted at O(1) system overhead; high throughput, reaching about 100K messages/s on an ordinary server; a fully distributed system in which Broker, Producer, and Consumer all natively support distribution with automatic load balancing; and support for parallel data loading into Hadoop, a practical solution for Hadoop-style log data and offline analysis systems that also need real-time processing, since Kafka's parallel loading unifies online and offline message handling. Compared with ActiveMQ, Apache Kafka is a very lightweight messaging system that, besides very good performance, also works well as a distributed system.
2. Kafka architecture
Terminology:
Broker: a Kafka cluster consists of one or more servers; each server is called a broker.
Topic: every message published to a Kafka cluster has a category, called its Topic. (Physically, messages of different Topics are stored separately; logically, a Topic's messages may be stored on one or more brokers, but users only need to specify the Topic to produce or consume data, without caring where it is stored.)
Partition: a physical concept; each Topic contains one or more Partitions.
Producer: publishes messages to Kafka brokers.
Consumer: a message consumer, i.e. a client that reads messages from Kafka brokers.
Consumer Group: each Consumer belongs to a specific Consumer Group (a group name can be specified per Consumer; if none is specified, the Consumer belongs to the default group).
A typical Kafka cluster contains a number of Producers (page views generated by a web front end, server logs, system CPU and memory metrics, and so on), a number of brokers (Kafka scales horizontally; generally, the more brokers, the higher the cluster throughput), a number of Consumer Groups, and a ZooKeeper cluster. Kafka uses ZooKeeper to manage cluster configuration, elect the leader, and rebalance when the membership of a Consumer Group changes. Producers publish messages to brokers in push mode, and Consumers subscribe to and consume messages from brokers in pull mode.
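A quick way to see this push/pull model in action, once a broker is up, is Kafka's bundled console producer and consumer. This is only an illustrative sketch (the broker address 192.168.225.128:9091, the topic name test, and the /opt/kafka layout from the next section are assumptions, not from the original text):

# create a test topic via ZooKeeper
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.225.128:2181 --replication-factor 1 --partitions 1 --topic test
# push messages typed on stdin to the broker
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.225.128:9091 --topic test
# pull and print messages from the beginning of the topic
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.225.128:2181 --topic test --from-beginning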
3. Building the Kafka image and container cluster
(1). Write the Dockerfile
root@ubuntu:/home/zhl/kafka# vi Dockerfile
FROM index.tenxcloud.com/docker_library/java
MAINTAINER HaHa
COPY kafka_2.10-0.9.0.1.tgz /tmp/
RUN tar -xzf /tmp/kafka_2.10-0.9.0.1.tgz -C /opt
RUN mv /opt/kafka_2.10-0.9.0.1 /opt/kafka
RUN rm -f /tmp/kafka_2.10-0.9.0.1.tgz
ENV KAFKA_HOME /opt/kafka
ADD start-kafka.sh /usr/bin/start-kafka.sh
RUN chmod 777 /usr/bin/start-kafka.sh
CMD /usr/bin/start-kafka.sh
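The image can then be built the same way as the ZooKeeper one; the tag name kafka below is an assumption, since the original text does not show this step:

root@ubuntu:/home/zhl/kafka# docker build -t kafka .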
(2). Write the container start script
root@ubuntu:/home/zhl/kafka# vi start-kafka.sh
cp $KAFKA_HOME/config/server.properties $KAFKA_HOME/config/server.properties.bk
sed -r -i "s/(zookeeper.connect)=(.*)/\1=${ZK}/g" $KAFKA_HOME/config/server.properties
sed -r -i "s/(broker.id)=(.*)/\1=${BROKER_ID}/g" $KAFKA_HOME/config/server.properties
sed -r -i "s/(log.dirs)=(.*)/\1=\/tmp\/kafka-logs-${BROKER_ID}/g" $KAFKA_HOME/config/server.properties
sed -r -i "s/#(advertised.host.name)=(.*)/\1=${HOST_IP}/g" $KAFKA_HOME/config/server.properties
sed -r -i "s/#(port)=(.*)/\1=${PORT}/g" $KAFKA_HOME/config/server.properties
sed -r -i "s/(listeners)=(.*)/\1=PLAINTEXT:\/\/:${PORT}/g" $KAFKA_HOME/config/server.properties
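The source text is cut off at this point. As a hedged sketch only: the start script would normally finish by launching the broker in the foreground, and judging from the variables it consumes (ZK, BROKER_ID, HOST_IP, PORT) and the ports 9091-9093 visible in the earlier netstat output, the three broker containers were presumably started along the following lines (the image tag kafka and the exact flags are assumptions, not taken from the original):

# likely final line of start-kafka.sh (assumed, not shown in the source)
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties

# hypothetical broker startup, one container per broker id and port
ZK=192.168.225.128:2181,192.168.225.128:2182,192.168.225.128:2183
docker run -d --name=kafka1 -p 9091:9091 -e ZK=$ZK -e BROKER_ID=1 -e HOST_IP=192.168.225.128 -e PORT=9091 kafka
docker run -d --name=kafka2 -p 9092:9092 -e ZK=$ZK -e BROKER_ID=2 -e HOST_IP=192.168.225.128 -e PORT=9092 kafka
docker run -d --name=kafka3 -p 9093:9093 -e ZK=$ZK -e BROKER_ID=3 -e HOST_IP=192.168.225.128 -e PORT=9093 kafka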