Kafka Parameter Configuration
Configure which disks the logs land on (log.dirs).
Configure which ZooKeeper ensemble to use (zookeeper.connect).
Note: to place Kafka under a chroot, just append a /chroot path at the end of the connect string.
Listener format: protocol name, host name, port number, written as protocol://hostname:port.
These are broker-wide global settings; they can also be specified per topic at creation time, and a topic-level setting overrides the broker default.
Kafka is written in Scala and ultimately runs on the JVM, so tuning Kafka means setting JVM parameters; the startup scripts read two environment variables at launch.
Kafka does not require much OS-level tuning; only a few parameters usually need attention.
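As a sketch, the two environment variables the startup scripts read are commonly set like this; the heap and GC values here are illustrative, not recommendations:

```shell
# KAFKA_HEAP_OPTS and KAFKA_JVM_PERFORMANCE_OPTS are picked up by
# bin/kafka-run-class.sh before the broker JVM is launched.
export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=20"
# Then start the broker as usual:
# bin/kafka-server-start.sh config/server.properties
```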
Kafka Configuration Parameters in Detail
Kafka's configuration is split into three groups: broker, producer, and consumer.
1. Global BROKER configuration
The three most essential settings are broker.id, log.dirs, and zookeeper.connect.
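A minimal server.properties sketch covering just these three; the path and address are placeholders:

```properties
# Unique integer identifying this broker within the cluster.
broker.id=1
# Comma-separated list of data directories; partitions are spread across them.
log.dirs=/data/kafka-logs
# host:port[,host:port]... optionally followed by a /chroot path.
zookeeper.connect=localhost:2181
```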
------------------------------------------- System -------------------------------------------
broker.id=1
log.dirs=/tmp/kafka-logs
port=6667
message.max.bytes=1000000
num.network.threads=3
num.io.threads=8
background.threads=4
queued.max.requests=500
host.name
advertised.host.name
advertised.port
socket.send.buffer.bytes=100*1024
socket.receive.buffer.bytes=100*1024
socket.request.max.bytes=100*1024*1024
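For reference, the socket buffer products above expand to these plain byte values (a restatement of the listed defaults, not new settings):

```properties
# 100*1024 bytes = 100 KB send/receive buffers.
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
# 100*1024*1024 bytes = 100 MB cap on a single request.
socket.request.max.bytes=104857600
```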
------------------------------------------- Log -------------------------------------------
log.segment.bytes=1024*1024*1024
log.roll.hours=24*7
log.cleanup.policy=delete
log.retention.minutes=10080 (7 days)
How often to check whether a log can be deleted; default 1 minute:
log.cleanup.interval.mins=1
log.retention.bytes=-1
log.retention.check.interval.ms=300000 (5 minutes)
log.cleaner.enable=false
log.cleaner.threads=1
log.cleaner.io.max.bytes.per.second=None
log.cleaner.dedupe.buffer.size=500*1024*1024
log.cleaner.io.buffer.size=512*1024
log.cleaner.io.buffer.load.factor=0.9
log.cleaner.backoff.ms=15000
log.cleaner.min.cleanable.ratio=0.5
log.cleaner.delete.retention.ms=86400000 (1 day)
log.index.size.max.bytes=10*1024*1024
log.index.interval.bytes=4096
log.flush.interval.messages=None
log.flush.scheduler.interval.ms=3000
log.flush.interval.ms=None
log.delete.delay.ms=60000
log.flush.offset.checkpoint.interval.ms=60000
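A sketch of a combined time- and size-based retention setup built from the keys above; the values are illustrative:

```properties
# Roll a new segment at 1 GB (1024*1024*1024 bytes).
log.segment.bytes=1073741824
# Keep data for 7 days (10080 minutes), with no total-size limit (-1).
log.retention.minutes=10080
log.retention.bytes=-1
# Expired segments are deleted rather than compacted.
log.cleanup.policy=delete
```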
------------------------------------------- Topic -------------------------------------------
auto.create.topics.enable=true
default.replication.factor=1
num.partitions=1
Example: --replication-factor 3 --partitions 1 --topic replicated-topic creates a topic named replicated-topic with one partition, replicated to three brokers.
---------------------------------- Replication (leader / replicas) ----------------------------------
controller.socket.timeout.ms=30000
controller.message.queue.size=10
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30*1000
replica.socket.receive.buffer.bytes=64*1024
replica.fetch.max.bytes=1024*1024
replica.fetch.wait.max.ms=500
replica.fetch.min.bytes=1
num.replica.fetchers=1
replica.high.watermark.checkpoint.interval.ms=5000
controlled.shutdown.enable=false
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
auto.leader.rebalance.enable=false
leader.imbalance.per.broker.percentage=10
leader.imbalance.check.interval.seconds=300
offset.metadata.max.bytes
---------------------------------- ZooKeeper ----------------------------------
zookeeper.connect=localhost:2181
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=2000
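The /chroot note from earlier, sketched against an assumed three-node ensemble (host names are placeholders):

```properties
# The chroot path is appended once, after the last host:port pair,
# not after every host.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka
```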
Modifying configuration
Some broker-level settings can be overridden by each topic's own configuration, for example:
Add a configuration:
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1 --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
Modify a configuration:
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --config max.message.bytes=128000
Delete a configuration:
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --deleteConfig max.message.bytes
2. CONSUMER configuration
The most essential settings are group.id and zookeeper.connect.
group.id
consumer.id
client.id=group id value
zookeeper.connect=localhost:2182
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=2000
auto.offset.reset=largest
socket.timeout.ms=30*1000
socket.receive.buffer.bytes=64*1024
fetch.message.max.bytes=1024*1024
auto.commit.enable=true
auto.commit.interval.ms=60*1000
queued.max.message.chunks=10
rebalance.max.retries=4
rebalance.backoff.ms=2000
refresh.leader.backoff.ms
fetch.min.bytes=1
fetch.wait.max.ms=100
consumer.timeout.ms=-1
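A minimal old-style (ZooKeeper-based) consumer configuration built from the keys above; the group name and address are placeholders:

```properties
# Consumers sharing a group.id divide the topic's partitions between them.
group.id=my-consumer-group
zookeeper.connect=localhost:2181
# Where to start when no committed offset exists; largest = newest data.
auto.offset.reset=largest
# Commit consumed offsets automatically, once a minute.
auto.commit.enable=true
auto.commit.interval.ms=60000
```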
3. PRODUCER configuration
The most essential settings: metadata.broker.list, request.required.acks, producer.type, serializer.class.
metadata.broker.list
request.required.acks=0
request.timeout.ms=10000
send.buffer.bytes=100*1024
key.serializer.class
partitioner.class=kafka.producer.DefaultPartitioner
compression.codec=none
compressed.topics=null
message.send.max.retries=3
retry.backoff.ms=100
topic.metadata.refresh.interval.ms=600*1000
client.id=""
------------------------------------------- Messaging mode -------------------------------------------
producer.type=sync
queue.buffering.max.ms=5000
queue.buffering.max.messages=10000
queue.enqueue.timeout.ms=-1
batch.num.messages=200
serializer.class=kafka.serializer.DefaultEncoder
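A minimal old-style producer configuration using the keys above; the broker addresses are placeholders:

```properties
# Bootstrap list of brokers used to fetch cluster metadata.
metadata.broker.list=broker1:9092,broker2:9092
# 0 = fire and forget; 1 = wait for the leader; -1 = wait for all replicas.
request.required.acks=1
# sync blocks on each send; async batches through a background queue.
producer.type=sync
serializer.class=kafka.serializer.DefaultEncoder
```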