Kafka Cluster Deployment


I. Basic System Initialization Before Installation

1. Disable SELinux and the Firewall

[root@zookeeper01 ~]# setenforce 0
[root@zookeeper01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

[root@zookeeper01 ~]# systemctl stop firewalld && systemctl disable firewalld
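
A quick optional check confirms both changes took effect (getenforce should report Permissive until the next reboot, after which the config change makes it Disabled):

[root@zookeeper01 ~]# getenforce
[root@zookeeper01 ~]# systemctl is-active firewalld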
2. Set the Hostname and Add Hosts Entries

[root@zookeeper01 ~]# hostnamectl set-hostname zookeeper01
[root@zookeeper02 ~]# hostnamectl set-hostname zookeeper02
[root@zookeeper03 ~]# hostnamectl set-hostname zookeeper03

[root@zookeeper01 ~]# cat << EOF >> /etc/hosts
> 172.16.1.11  zookeeper01  
> 172.16.1.12  zookeeper02 
> 172.16.1.13  zookeeper03
> EOF

###  Repeat the same steps on zookeeper02 and zookeeper03
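
To make sure the hosts entries work, a simple resolution check on each node does not hurt:

[root@zookeeper01 ~]# ping -c 1 zookeeper02
[root@zookeeper01 ~]# ping -c 1 zookeeper03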
3. Configure Time Synchronization

[root@zookeeper01 ~]# \cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@zookeeper01 ~]# ntpdate ntp.aliyun.com
[root@zookeeper01 ~]# systemctl start ntpdate && systemctl enable ntpdate
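
For ongoing synchronization, a periodic cron entry is a common complement to the one-shot ntpdate service (a sketch; the ntpdate path and the 30-minute interval are assumptions, adjust as needed):

[root@zookeeper01 ~]# echo '*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1' >> /var/spool/cron/root
[root@zookeeper01 ~]# date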

II. Install and Configure the JDK

1. Install the JDK

[root@zookeeper01 ~]# tar -zxvf jdk-8u131-linux-x64.tar.gz -C /usr/local/
[root@zookeeper01 ~]# cd /usr/local/jdk1.8.0_131/bin

[root@zookeeper01 bin]# ./java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
2. Configure Environment Variables (Required)

[root@zookeeper01 ~]# cp /etc/profile /etc/profile.bak

[root@zookeeper01 ~]# vim /etc/profile       # append the following two lines at the end of the file
export JAVA_HOME=/usr/local/jdk1.8.0_131
export PATH=.:$PATH:$JAVA_HOME/bin

[root@zookeeper01 ~]# source /etc/profile
[root@zookeeper01 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

III. Install the ZooKeeper Cluster


### This Kafka cluster deployment continues from the previous post, "Zookeeper 集群部署" (ZooKeeper Cluster Deployment), so the ZooKeeper installation steps are omitted here. Before moving on, it is worth confirming that the ZooKeeper ensemble is healthy, as sketched below.
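
A minimal health check, assuming ZooKeeper was installed under /usr/local/zookeeper as in that post (adjust the path if yours differs); one node should report "Mode: leader" and the other two "Mode: follower":

[root@zookeeper01 ~]# /usr/local/zookeeper/bin/zkServer.sh status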

IV. Install and Deploy the Kafka Cluster

1. Upload and Extract the Installation Package

[root@zookeeper01 ~]# tar -zxvf kafka_2.11-2.3.0.tgz -C /usr/local
[root@zookeeper01 ~]# mv /usr/local/kafka_2.11-2.3.0 /usr/local/kafka
[root@zookeeper01 ~]# cd /usr/local/kafka/config
2. Modify the Configuration File
1) Minimum required parameters

[root@zookeeper01 config]# cp server.properties server.properties.default
[root@zookeeper01 config]# vim server.properties

 #  These are the minimum parameters the configuration file needs
broker.id=1
log.dirs=/data/kafka/logs
zookeeper.connect=172.16.1.11:2181,172.16.1.12:2181,172.16.1.13:2181

 ## On zookeeper02 and zookeeper03 the configuration is the same except for broker.id, as follows
 #  zookeeper02 node
broker.id=2
log.dirs=/data/kafka/logs
zookeeper.connect=172.16.1.11:2181,172.16.1.12:2181,172.16.1.13:2181

 #  zookeeper03 node
broker.id=3
log.dirs=/data/kafka/logs
zookeeper.connect=172.16.1.11:2181,172.16.1.12:2181,172.16.1.13:2181

2) Recommended configuration for reference

broker.id=1
port=9092
host.name=172.16.1.11
num.network.threads=5
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
default.replication.factor=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=172.16.1.11:2181,172.16.1.12:2181,172.16.1.13:2181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=true
delete.topic.enable=true

Parameter notes:
num.io.threads: number of threads the broker uses for disk I/O; should be greater than the number of disks
socket.send.buffer.bytes:     socket send buffer size
socket.receive.buffer.bytes:  socket receive buffer size
socket.request.max.bytes:     maximum size of a socket request
num.partitions: default number of partitions per topic
num.recovery.threads.per.data.dir: threads per data directory used for log recovery at startup and flushing at shutdown
offsets.topic.replication.factor:  replication factor of the internal offsets topic
transaction.state.log.replication.factor: replication factor of the transaction state topic
transaction.state.log.min.isr: overrides min.insync.replicas for the transaction state topic
default.replication.factor: default replication factor for automatically created topics
log.retention.hours:  maximum time messages are retained
log.segment.bytes:    size of each log segment
log.retention.check.interval.ms: interval at which segments are checked against the retention policy
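
Note: host.name and port are legacy settings; on current Kafka versions the preferred equivalent is the listeners property, roughly as below (shown for zookeeper01, adjust the address per node):

listeners=PLAINTEXT://172.16.1.11:9092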
3. Sync the Directory to the Other Nodes

[root@zookeeper01 ~]# scp -r /usr/local/kafka zookeeper02:/usr/local
[root@zookeeper01 ~]# scp -r /usr/local/kafka zookeeper03:/usr/local
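
Since the copied server.properties still carries broker.id=1 (and host.name, if the recommended config is used), remember to adjust those values on the other two nodes. A sketch over ssh, assuming the passwordless login already used for scp:

[root@zookeeper01 ~]# ssh zookeeper02 "sed -i 's/^broker.id=1/broker.id=2/' /usr/local/kafka/config/server.properties"
[root@zookeeper01 ~]# ssh zookeeper03 "sed -i 's/^broker.id=1/broker.id=3/' /usr/local/kafka/config/server.properties"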
4. Create the Corresponding Directories

[root@zookeeper01 ~]# mkdir -p /data/kafka/logs
[root@zookeeper01 ~]# scp -r /data/kafka zookeeper02:/data
[root@zookeeper01 ~]# scp -r /data/kafka zookeeper03:/data
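
Copying an empty directory with scp works, but running mkdir remotely is an equivalent and arguably cleaner alternative:

[root@zookeeper01 ~]# ssh zookeeper02 "mkdir -p /data/kafka/logs"
[root@zookeeper01 ~]# ssh zookeeper03 "mkdir -p /data/kafka/logs"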
5. Start the Kafka Cluster

[root@zookeeper01 ~]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

[root@zookeeper02 ~]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

[root@zookeeper03 ~]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
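
If a broker does not come up, its server log is the first place to look (with this layout it ends up under the Kafka install directory):

[root@zookeeper01 ~]# tail -n 50 /usr/local/kafka/logs/server.log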
6. Verify the Startup

[root@zookeeper01 ~]# netstat -lntp | grep 9092
tcp6       0      0 172.16.1.11:9092        :::*                    LISTEN      2500/java

 #  Alternatively, check the Kafka process; its command line is very long, so only the beginning is shown below
[root@zookeeper01 ~]# ps -ef | grep kafka 
root      2500     1  8 May24 ?        1-03:52:47 /usr/local/java/jdk1.8.0_201/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation ......

 #  Verify zookeeper02 and zookeeper03 in the same way
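
Beyond the port and process checks, it is worth confirming that all three brokers registered in ZooKeeper and that a topic spanning them can be created (a sketch using the bundled CLI tools; the topic name test is arbitrary):

 #  Expect all three broker ids, e.g. [1, 2, 3]
[root@zookeeper01 ~]# /usr/local/kafka/bin/zookeeper-shell.sh 172.16.1.11:2181 ls /brokers/ids

 #  Create a test topic across all brokers and check its partition assignment
[root@zookeeper01 ~]# /usr/local/kafka/bin/kafka-topics.sh --bootstrap-server 172.16.1.11:9092 --create --topic test --partitions 3 --replication-factor 3
[root@zookeeper01 ~]# /usr/local/kafka/bin/kafka-topics.sh --bootstrap-server 172.16.1.11:9092 --describe --topic test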
7. Configure Kafka Environment Variables (Optional)

 #  Quote EOF so that $PATH and $KAFKA_HOME are written literally rather than expanded now
[root@zookeeper01 ~]# cat << 'EOF' >> /etc/profile
> export KAFKA_HOME=/usr/local/kafka
> export PATH=.:$PATH:$KAFKA_HOME/bin
> EOF

[root@zookeeper01 ~]# source /etc/profile

 #  All nodes need this; repeat the same steps on the other two nodes
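
With the PATH in place the CLI tools can be invoked directly; a quick end-to-end check with the console producer and consumer might look like this (it assumes the test topic created above; in this Kafka version the console producer takes --broker-list):

 #  Terminal 1: consume
[root@zookeeper01 ~]# kafka-console-consumer.sh --bootstrap-server 172.16.1.11:9092 --topic test --from-beginning

 #  Terminal 2: type a few lines, then Ctrl+C; they should show up in terminal 1
[root@zookeeper01 ~]# kafka-console-producer.sh --broker-list 172.16.1.11:9092 --topic test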
