Kafka Log Management

Once a Kafka broker is running, it produces a large volume of application logs that can quickly fill the disk, so managing and cleaning up these logs is essential.


log4j.properties

This is the configuration file for Kafka's application logging, located at $KAFKA_HOME/config/log4j.properties.
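Note that this file controls the broker's own application logs (server.log, controller.log, state-change.log, and so on). It has nothing to do with the topic data segments, whose location is set separately by log.dirs in server.properties.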

By default this file stores logs under $KAFKA_HOME/logs; you can point it at a larger data disk instead. In this article it is set to /data/kafka/logs.


Note: changing this file alone does not take effect. You also need to edit the script $KAFKA_HOME/bin/kafka-run-class.sh and add the following setting:

LOG_DIR="/data/kafka/logs"
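The script change is needed because kafka-run-class.sh computes LOG_DIR itself before the JVM starts, and in recent Kafka distributions only falls back to $KAFKA_HOME/logs when the variable is unset. The relevant block looks roughly like this (exact wording may differ between releases), so you can either hard-code LOG_DIR above it, as done here, or export the variable in the broker's environment:

# excerpt from bin/kafka-run-class.sh (recent Kafka versions)
if [ "x$LOG_DIR" = "x" ]; then
  LOG_DIR="$base_dir/logs"
fi

# alternative: create the directory and export the variable before startup
mkdir -p /data/kafka/logs
export LOG_DIR="/data/kafka/logs"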


log4j.properties configuration

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

kafka.logs.dir=/data/kafka/logs

log4j.rootLogger=INFO, default

log4j.appender.default=org.apache.log4j.RollingFileAppender
log4j.appender.default.File=${kafka.logs.dir}/default.log
log4j.appender.default.MaxBackupIndex = 10
log4j.appender.default.MaxFileSize = 100MB
log4j.appender.default.layout=org.apache.log4j.PatternLayout
log4j.appender.default.layout.ConversionPattern=[%d] %p %m (%c)%n
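# A note on sizing (not part of the stock file): with MaxFileSize=100MB and
# MaxBackupIndex=10, log4j's RollingFileAppender keeps at most 11 files per
# appender (default.log plus default.log.1 .. default.log.10), i.e. about
# 1.1GB. The six appenders in this file are therefore bounded at roughly
# 6.6GB in total, which is what keeps the disk from filling up.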


log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxBackupIndex = 10
log4j.appender.kafkaAppender.MaxFileSize = 100MB
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n


log4j.appender.stateChangeAppender=org.apache.log4j.RollingFileAppender
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.MaxBackupIndex = 10
log4j.appender.stateChangeAppender.MaxFileSize = 100MB
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n


log4j.appender.requestAppender=org.apache.log4j.RollingFileAppender
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.MaxBackupIndex = 10
log4j.appender.requestAppender.MaxFileSize = 100MB
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n


log4j.appender.cleanerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.MaxBackupIndex = 10
log4j.appender.cleanerAppender.MaxFileSize = 100MB
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n


log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.MaxBackupIndex = 10
log4j.appender.controllerAppender.MaxFileSize = 100MB
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG

log4j.logger.kafka=INFO, kafkaAppender
log4j.additivity.kafka=false
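# additivity=false stops these messages from also propagating up to the
# rootLogger, so each category is written to its own file instead of being
# duplicated in default.log. The same pattern is applied to every logger
# below.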

log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false

log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false
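After saving both changes, restart the broker and confirm that new log files are created on the data disk. A minimal check, assuming the paths used in this article:

cd $KAFKA_HOME
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties
ls -lh /data/kafka/logs   # expect server.log, controller.log, state-change.log ...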

