Kafka consumer outputs too many DEBUG statements
I'm having trouble with the volume of logs produced by a service running in my K8s cluster.
The problem is similar to the one described here, but I haven't been able to solve it. My project uses Akka and Log4j2, and I don't know how to fix this even after following the advice from that earlier post.
Here is the Log4j2 configuration, and below it the Akka section of application.conf.
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{DEFAULT} [%t] %-5level %logger{1}.%method - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
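As a side note, since the noisy entries come from the Kafka client classes themselves, Log4j2 can also cap a specific package below the root level with a named `<Logger>` element. A minimal sketch (assuming the Kafka client logs under the `org.apache.kafka` package, as the log excerpt below suggests):

```xml
<Loggers>
    <!-- Cap Kafka client internals at WARN so their DEBUG/INFO chatter is dropped -->
    <Logger name="org.apache.kafka" level="warn" additivity="false">
        <AppenderRef ref="Console"/>
    </Logger>
    <Root level="info">
        <AppenderRef ref="Console"/>
    </Root>
</Loggers>
```

This only takes effect if the Kafka client's SLF4J binding actually routes to Log4j2; if a competing binding such as slf4j-log4j12 is on the classpath, that binding's own configuration wins.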
And the Akka configuration is:
akka {
    # Options: OFF, ERROR, WARNING, INFO, DEBUG
    loglevel = "ERROR"

    # Log level for the very basic logger activated during ActorSystem startup.
    # This logger prints the log messages to stdout (System.out).
    # Options: OFF, ERROR, WARNING, INFO, DEBUG
    stdout-loglevel = "ERROR"

    # Log the complete configuration at INFO level when the actor system is started.
    # This is useful when you are uncertain of what configuration is used.
    log-config-on-start = off

    # Properties for akka.kafka.ConsumerSettings can be
    # defined in this section or a configuration section with
    # the same layout.
    kafka.consumer {
        # Tuning property of scheduled polls.
        poll-interval = 500ms

        # Tuning property of the `KafkaConsumer.poll` parameter.
        # Note that a non-zero value means that the thread that
        # is executing the stage will be blocked.
        poll-timeout = 500ms

        # The stage will await outstanding offset commit requests before
        # shutting down, but if that takes longer than this timeout it will
        # stop forcefully.
        stop-timeout = 30s

        # How long to wait for `KafkaConsumer.close`.
        close-timeout = 20s

        # If offset commit requests are not completed within this timeout
        # the returned Future is completed with `CommitTimeoutException`.
        commit-timeout = 15s

        # If commits take longer than this time a warning is logged.
        commit-time-warning = 1s

        # If for any reason `KafkaConsumer.poll` blocks for longer than the configured
        # poll-timeout then it is forcefully woken up with `KafkaConsumer.wakeup`.
        # The KafkaConsumerActor will throw
        # `org.apache.kafka.common.errors.WakeupException`, which will be ignored
        # until the `max-wakeups` limit is exceeded.
        wakeup-timeout = 6s

        # After exceeding the maximum number of wakeups the consumer will stop and the stage will fail.
        # Setting it to 0 will let it ignore the wakeups and try to get the polling done forever.
        max-wakeups = 10

        # If set to a finite duration, the consumer will re-send the last committed offsets periodically
        # for all assigned partitions. See https://issues.apache.org/jira/browse/KAFKA-4682.
        commit-refresh-interval = infinite

        # If enabled, log stack traces before waking up the KafkaConsumer to give
        # some indication why the KafkaConsumer is not honouring the `poll-timeout`.
        wakeup-debug = true

        # Fully qualified config path which holds the dispatcher configuration
        # to be used by the KafkaConsumerActor. Some blocking may occur.
        #use-dispatcher = "akka.kafka.default-dispatcher"

        # Time to wait for pending requests when a partition is closed.
        wait-close-partition = 500ms
    }
}
But I still keep seeing logs like the following:
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-1
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-0
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-2
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-0] to broker eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 0 rack: null)
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-1] to broker eric-data-message-bus-kf-1.eric-data-message-bus-kf.default:9092 (id: 1 rack: null)
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-2] to broker eric-data-message-bus-kf-2.eric-data-message-bus-kf.default:9092 (id: 2 rack: null)
17:00:25.427 [kafka-coordinator-heartbeat-thread | GroupIDTest] DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending Heartbeat request for group GroupIDTest to coordinator eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 2147483647 rack: null)
17:00:25.428 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received successful Heartbeat response for group GroupIDTest
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-1
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-0
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-2
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-0] to broker eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 0 rack: null)
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-1] to broker eric-data-message-bus-kf-1.eric-data-message-bus-kf.default:9092 (id: 1 rack: null)
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-2] to broker eric-data-message-bus-kf-2.eric-data-message-bus-kf.default:9092 (id: 2 rack: null)
Any suggestions?
Answer
Under the hood, the Kafka libraries use slf4j-log4j12, which in turn uses log4j as the underlying logging framework.
So you need to exclude it from the kafka_2.10/kafka_2.11, kafka-client and zookeeper artifacts in your pom or sbt file (wherever they are mentioned, including anywhere else in the project's pom/sbt), declare the slf4j-log4j12 dependency explicitly in the pom or sbt, and put your log4j.xml in the src/main/resources folder with the level set to INFO. That will get rid of all the DEBUG statements.
Example in pom.xml:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
In build.sbt, append:
exclude("org.slf4j", "slf4j-log4j12")
to each libraryDependencies line.
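For completeness, a sketch of what such a libraryDependencies entry might look like in build.sbt (the artifact and version numbers mirror the pom example above and are only illustrative):

```scala
libraryDependencies ++= Seq(
  // Declare slf4j-log4j12 explicitly so the binding version is under our control
  "org.slf4j" % "slf4j-log4j12" % "1.7.5",
  // Kafka with the transitive logging artifacts excluded
  ("org.apache.kafka" %% "kafka" % "1.0.0")
    .exclude("org.slf4j", "slf4j-log4j12")
    .exclude("log4j", "log4j")
)
```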