Kafka + ZooKeeper Deployment

Posted by handsomeboy-东



Basic environment setup

  • Nodes:
Node		IP address				Services deployed
node1		192.168.118.44			jdk, kafka, zookeeper
node2		192.168.118.55			jdk, kafka, zookeeper
node3		192.168.118.66			jdk, kafka, zookeeper
  • Environment preparation, run on all three nodes at the same time
## Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
setenforce 0

## Synchronize the clock
ntpdate ntp1.aliyun.com

## Create the service user
useradd appUser
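
  • Optionally, map the hostnames shown in the shell prompts below (node1/node2/node3) to the IPs above in /etc/hosts on every node. This step is an assumption added for convenience; the rest of the guide only uses the raw IPs:
cat >> /etc/hosts << 'EOF'
192.168.118.44 node1
192.168.118.55 node2
192.168.118.66 node3
EOF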

Deploy the JDK

  • Deploy on all three nodes
## Extract the JDK package
tar xf jdk1.8.0_221.tar.gz -C /usr/local/

## Set the environment variables
vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.8.0_221
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile

java -version			## check the installed version

## Create the required directories
mkdir -p /export/packages/{kafka,zookeeper,java}
mkdir -p /export/data/{kafka,zookeeper}
mkdir -p /export/Logs/{kafkalog,zkdatalog}
mkdir -p /export/servers

Deploy ZooKeeper

  • Deploy on all three nodes at the same time
## Extract the ZooKeeper package
tar xf zookeeper-3.4.13.tar -C /export/packages/zookeeper/

## Back up and edit the configuration file
cd /export/packages/zookeeper/zookeeper-3.4.13/conf
cp zoo_sample.cfg zoo.cfg

vim zoo.cfg
tickTime=2000  		 # heartbeat interval between ZooKeeper servers and clients, in milliseconds
initLimit=10   		 # maximum number of ticks a Follower may take for its initial connection to the Leader, i.e. 10*2s here
syncLimit=5     	 # timeout for Leader/Follower synchronization; if a Follower does not respond within 5*2s, the Leader considers it dead and removes it from the server list
dataDir=/export/packages/zookeeper/zookeeper-3.4.13/data      # modified: directory where ZooKeeper stores its data; create it separately
dataLogDir=/export/packages/zookeeper/zookeeper-3.4.13/logs   # added: directory for the transaction logs; create it separately
clientPort=2181      # client connection port
# Add the cluster members (server.id=host:followerPort:electionPort)
server.1=192.168.118.44:3188:3288
server.2=192.168.118.55:3188:3288
server.3=192.168.118.66:3188:3288

# On each node, create a myid file in the directory specified by dataDir; its value must match the server.N id in zoo.cfg
echo 1 > /export/packages/zookeeper/zookeeper-3.4.13/data/myid		## node1
echo 2 > /export/packages/zookeeper/zookeeper-3.4.13/data/myid		## node2
echo 3 > /export/packages/zookeeper/zookeeper-3.4.13/data/myid		## node3
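
## Quick check that each id was written correctly (run on every node)
cat /export/packages/zookeeper/zookeeper-3.4.13/data/myid		## should print 1 on node1, 2 on node2, 3 on node3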

## Create a ZooKeeper service script
vim /etc/init.d/zookeeper
#!/bin/bash
#chkconfig:2345 20 90
#description:Zookeeper Service Control Script
ZK_HOME='/export/packages/zookeeper/zookeeper-3.4.13/'
case $1 in
start)
	echo "---------- zookeeper 启动 ------------"
	$ZK_HOME/bin/zkServer.sh start
;;
stop)
	echo "---------- zookeeper 停止 ------------"
	$ZK_HOME/bin/zkServer.sh stop
;;
restart)
	echo "---------- zookeeper 重启 ------------"
	$ZK_HOME/bin/zkServer.sh restart
;;
status)
	echo "---------- zookeeper 状态 ------------"
	$ZK_HOME/bin/zkServer.sh status
;;
*)
    echo "Usage: $0 start|stop|restart|status"
esac

## Enable start on boot
chmod +x /etc/init.d/zookeeper
chkconfig --add zookeeper

service zookeeper start		## start zookeeper

service zookeeper status	## check the zookeeper cluster status
---------- zookeeper status ------------
ZooKeeper JMX enabled by default
Using config: /export/packages/zookeeper/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: follower
[root@node1 export]# netstat -antp | grep 2181
tcp6       0      0 :::2181                 :::*                    LISTEN      89681/java          
tcp6       0      0 127.0.0.1:37636         127.0.0.1:2181          TIME_WAIT   -                   
tcp6       0      0 192.168.118.44:48274    192.168.118.66:2181     ESTABLISHED 90562/java 
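
  • Optionally, verify the cluster end to end with zkCli.sh, which ships with ZooKeeper 3.4.13. This is a minimal sanity-check sketch; the znode name /health_check is only an example:
/export/packages/zookeeper/zookeeper-3.4.13/bin/zkCli.sh -server 192.168.118.44:2181
## inside the zkCli shell:
create /health_check "ok"		## write a test znode
get /health_check				## read it back (works against any of the three servers)
delete /health_check			## clean up
quit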

Deploy Kafka

  • Deploy on all three nodes at the same time
tar xf kafka_2.11-2.3.0.tar -C /export/packages/kafka/
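## The steps below use /export/servers/kafka as the Kafka home directory, so link (or copy) the
## extracted distribution there first. The directory name below assumes the archive unpacks to
## kafka_2.11-2.3.0; adjust it if yours differs.
ln -s /export/packages/kafka/kafka_2.11-2.3.0 /export/servers/kafka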
cd /export/servers/kafka/config/
cp -r server.properties server.properties.back
vim /export/servers/kafka/config/server.properties		## edit the configuration file

node1:
21 broker.id=0									   # globally unique id of the broker; must be different on every node
31 listeners=PLAINTEXT://192.168.118.44:9092	   # IP and port to listen on; use each broker's own IP
60 log.dirs=/export/Logs/kafkalog 				   # path where Kafka stores its data and run logs
123 zookeeper.connect=192.168.118.44:2181,192.168.118.55:2181,192.168.118.66:2181	# ZooKeeper cluster connection string

node2:
broker.id=1							
listeners=PLAINTEXT://192.168.118.55:9092					
log.dirs=/export/Logs/kafkalog 					
zookeeper.connect=192.168.118.44:2181,192.168.118.55:2181,192.168.118.66:2181

node3:
broker.id=2							
listeners=PLAINTEXT://192.168.118.66:9092					
log.dirs=/export/Logs/kafkalog 					
zookeeper.connect=192.168.118.44:2181,192.168.118.55:2181,192.168.118.66:2181	
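  • Since only a few keys differ per node, the per-node edits above can also be scripted. The snippet below is a minimal sketch; BROKER_ID and NODE_IP are placeholders you set per node, and it assumes the stock server.properties layout shipped with kafka_2.11-2.3.0:
BROKER_ID=0							## 0 on node1, 1 on node2, 2 on node3
NODE_IP=192.168.118.44				## each broker's own IP
CFG=/export/servers/kafka/config/server.properties

sed -i "s|^broker.id=.*|broker.id=${BROKER_ID}|" "$CFG"
sed -i "s|^#*listeners=.*|listeners=PLAINTEXT://${NODE_IP}:9092|" "$CFG"
sed -i "s|^log.dirs=.*|log.dirs=/export/Logs/kafkalog|" "$CFG"
sed -i "s|^zookeeper.connect=.*|zookeeper.connect=192.168.118.44:2181,192.168.118.55:2181,192.168.118.66:2181|" "$CFG"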
  • Change the owner and group of the files, on all three nodes at the same time
chown -R appUser.appUser /export/servers/kafka/
chown -R appUser.appUser /export/packages/kafka/
chown -R appUser.appUser /export/data/kafka/
chown -R appUser.appUser /export/Logs/kafkalog/
  • Create the Kafka service script, on all three nodes at the same time
vim /etc/init.d/kafka 
#!/bin/bash
#chkconfig:2345 22 88
#description:Kafka Service Control Script
KAFKA_HOME='/export/servers/kafka/'
case $1 in
start)
	echo "---------- Kafka 启动 ------------"
	$KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
;;
stop)
	echo "---------- Kafka 停止 ------------"
	$KAFKA_HOME/bin/kafka-server-stop.sh
;;
restart)
	$0 stop
	$0 start
;;
status)
	echo "---------- Kafka 状态 ------------"
	count=$(ps -ef | grep kafka | egrep -cv "grep|$$")
	if [ "$count" -eq 0 ];then
        echo "kafka is not running"
    else
        echo "kafka is running"
    fi
;;
*)
    echo "Usage: $0 start|stop|restart|status"
esac

## Enable start on boot
chmod +x /etc/init.d/kafka
chkconfig --add kafka

service kafka start				## start kafka
[root@node1 export]# service kafka status
---------- Kafka status ------------
kafka is running
[root@node1 export]# netstat -antp | grep 9092
tcp6       0      0 192.168.118.44:9092     :::*                    LISTEN      90562/java          
tcp6       0      0 192.168.118.44:34968    192.168.118.66:9092     ESTABLISHED 90562/java    
  • Switch to the appUser account and create a topic; this is done on node1 here (any of the three nodes works)
[appUser@node1 export]$ /export/servers/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.118.44:2181 --replication-factor 2 --partitions 1 --topic whd
Created topic whd.
Explanation:
  replication-factor 2: keep two replicas of each partition
  partitions 1: create one partition
  topic: the topic name
# the two replicas are placed on two of the brokers; a producer can then publish to the topic from any one of the servers

## List topics
[appUser@node1 export]$ /export/servers/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.118.44:2181
test
whd

## Show topic details
[appUser@node1 export]$  /export/servers/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.118.44:2181 --topic test
Topic:test	PartitionCount:1	ReplicationFactor:2	Configs:
	Topic: test	Partition: 0	Leader: 2	Replicas: 2,0	Isr: 2,0
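
  • To verify messaging end to end, the console tools shipped with Kafka can be used; this is a minimal sketch that reuses the whd topic created above:
## on node1: start a console producer and type a few test messages
/export/servers/kafka/bin/kafka-console-producer.sh --broker-list 192.168.118.44:9092 --topic whd

## on node2: consume them from the beginning
/export/servers/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.118.55:9092 --topic whd --from-beginning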

## Tuning: in production, increase the JVM heap size appropriately by setting this environment variable
export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
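
kafka-server-start.sh only applies its built-in 1 GB default heap when KAFKA_HEAP_OPTS is unset, so the variable must be exported in the broker's environment before startup. One option (a placement choice assumed here, not specified in the original) is to add the export to the start) branch of the init script above:
start)
	echo "---------- starting Kafka ------------"
	export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
	$KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
;;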
