Kafka ops: configuring Kafka's operational logs

Posted by cac2020


First, here is Kafka's operational log configuration file, log4j.properties.

Adjust the levels and appenders below to suit your needs.

#Level precedence: ALL < DEBUG < INFO < WARN < ERROR < FATAL < OFF
#1. A child logger (log4j.logger.*) overrides the root logger (log4j.rootLogger). The logger level decides what gets emitted; an appender's Threshold decides what that appender accepts.
#2. If the logger level is below the Threshold, the Threshold wins: the appender receives only events at or above the Threshold.
#3. If the logger level is above the Threshold, the logger level wins, because nothing below it is emitted in the first place. (A standalone sketch after this file illustrates rules 2 and 3.)
#4. Child loggers are used mainly to route messages into files separate from the root logger's output, usually together with log4j.additivity.
#log4j.additivity controls whether a logger also inherits its parent's appenders; the default is true.
#With true, messages would appear in stdout and kafkaAppender as well as in stateChangeAppender;
#here each category needs its own file, so it is set to false and messages go only to the dedicated appender.
#If a log4j.logger entry names no appender, it falls back to the appenders configured on log4j.rootLogger.

#Root logger
log4j.rootLogger=ERROR, stdout, kafkaAppender

#Console appender and layout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

#kafkaAppender: main server log appender and layout
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.Append=true
log4j.appender.kafkaAppender.Threshold=ERROR
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

#State-change log
log4j.logger.state.change.logger=ERROR, stateChangeAppender
log4j.additivity.state.change.logger=false
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

#Request handling
log4j.logger.kafka.request.logger=ERROR, requestAppender
log4j.additivity.kafka.request.logger=false
log4j.logger.kafka.network.Processor=ERROR, requestAppender
log4j.additivity.kafka.network.Processor=false
log4j.logger.kafka.server.KafkaApis=ERROR, requestAppender
log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=ERROR, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

#Log cleaner (kafka-logs cleanup)
log4j.logger.kafka.log.LogCleaner=ERROR, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

#controller
log4j.logger.kafka.controller=ERROR, controllerAppender
log4j.additivity.kafka.controller=false
log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

#authorizer
log4j.logger.kafka.authorizer.logger=ERROR, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

#ZkClient
log4j.logger.org.I0Itec.zkclient.ZkClient=ERROR
#zookeeper
log4j.logger.org.apache.zookeeper=ERROR
#kafka
log4j.logger.kafka=ERROR
#org.apache.kafka
log4j.logger.org.apache.kafka=ERROR
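
To see rules 2 and 3 in isolation, here is a minimal standalone sketch; the demo.* logger names and demo.log file are hypothetical, not part of Kafka's shipped configuration:

#Rule 2 demo: logger INFO is below the appender Threshold WARN, so demo.log receives WARN and above
log4j.logger.demo.rule2=INFO, demoAppender
log4j.additivity.demo.rule2=false

#Rule 3 demo: logger ERROR is above the Threshold WARN, so demo.log receives only ERROR and above
log4j.logger.demo.rule3=ERROR, demoAppender
log4j.additivity.demo.rule3=false

log4j.appender.demoAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.demoAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.demoAppender.File=${kafka.logs.dir}/demo.log
log4j.appender.demoAppender.Threshold=WARN
log4j.appender.demoAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.demoAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

Setting log4j.additivity to false is what keeps each category out of server.log and stdout; it is the same pattern used for the state-change, request, cleaner, controller, and authorizer logs above.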


Second, Kafka writes GC logs by default, as shown below:

$ ls
kafka-authorizer.log          kafkaServer-gc.log.3  kafkaServer-gc.log.8      server.log.2018-10-22-14
kafka-request.log             kafkaServer-gc.log.4  kafkaServer-gc.log.9      server.log.2018-10-22-15
kafkaServer-gc.log.0          kafkaServer-gc.log.5  kafkaServer.out
kafkaServer-gc.log.1          kafkaServer-gc.log.6  server.log
kafkaServer-gc.log.2.current  kafkaServer-gc.log.7  server.log.2018-10-22-13


These GC logs are not needed in production, so turn them off. In Kafka home's bin directory there is a script named kafka-run-class.sh; open it with vim.

Add the line KAFKA_GC_LOG_OPTS=" " (a single space) inside the GC block, as shown below. After restarting Kafka, GC logs are no longer written.

$ vim kafka-run-class.sh

GC_FILE_SUFFIX=-gc.log
GC_LOG_FILE_NAME=''
if [ "x$GC_LOG_ENABLED" = "xtrue" ]; then
  GC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX
  KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
  # added: blank out the options above so the JVM receives no GC-log flags
  KAFKA_GC_LOG_OPTS=" "
fi
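
To verify the change, restart the broker and confirm that no new GC log files appear; the paths below assume a typical installation and are only examples:

cd /opt/kafka                                    # example install directory
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties
ls logs/ | grep gc                               # only the old kafkaServer-gc.log.* files should remain

As an alternative to editing the script: in many Kafka versions kafka-run-class.sh enables GC logging only when KAFKA_GC_LOG_OPTS is unset, so exporting KAFKA_GC_LOG_OPTS=" " in the broker's environment before startup has the same effect without modifying the script.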

