Consuming from Kafka on the command line fails with (id: -1 rack: null) disconnected

Posted by 咖啡F


Running kafka-console-consumer on a CDP 7.1.7 cluster fails with the following error:

23/03/28 09:19:07 WARN clients.NetworkClient: [Consumer clientId=consumer-console-consumer-52833-1, groupId=console-consumer-52833] Bootstrap broker xx.xx.xx.xx:9092 (id: -1 rack: null) disconnected

Cause

The Kafka brokers have Kerberos enabled. You can confirm the relevant settings in Cloudera Manager:

1. In Cloudera Manager, navigate to Kafka > Configuration.
2. Check that SSL Client Authentication is set to none.
3. Check that Inter Broker Protocol is set to SASL_PLAINTEXT.

Solution

1. Create a jaas.conf file:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/var/run/cloudera-scm-agent/process/11111-kafka-KAFKA_BROKER/kafka.keytab"
  principal="kafka/xxxx@XXX.XXX.COM";
};

Note: if you are unsure of the principal, run kinit to authenticate first, then klist to see the principal and fill it in.
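The note above can be sketched as follows. On a live cluster you would run kinit against the keytab and read the principal off the klist output; here a canned klist output stands in (the host and realm names are made up) so the extraction step can be shown without a KDC:

```shell
# On a real cluster, run (using the keytab path from step 1):
#   kinit -kt /var/run/cloudera-scm-agent/process/11111-kafka-KAFKA_BROKER/kafka.keytab kafka/<host>@<REALM>
#   klist
# Canned klist output, used here purely for illustration:
klist_out='Ticket cache: FILE:/tmp/krb5cc_0
Default principal: kafka/host01.example.com@EXAMPLE.COM'

# Pull out the principal value to paste into jaas.conf
principal=$(printf '%s\n' "$klist_out" | awk -F': ' '/Default principal/ {print $2}')
echo "$principal"
```

The awk field separator `': '` splits on the colon-space after the label, so `$2` is exactly the principal string.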

2. Create a client.properties file:

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

3. Set KAFKA_OPTS

The jaas.conf path here should match wherever you saved the file in step 1:

export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/jaas.conf"

4. Run the consumer command:

/opt/cloudera/parcels/CDH/bin/kafka-console-consumer --topic mytopic --bootstrap-server brokerip:9092 --consumer.config client.properties --from-beginning

Reference
https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/kafka-securing/topics/kafka-secure-kerberos-enable.html

Kafka issue 02: KafkaTemplate reports Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected

1. Error message

The key error: Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected

Full log:

[pool-10-thread-1] INFO  o.a.k.c.p.ProducerConfig - [logAll,361] - ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.dns.lookup = use_all_dns_ips
	client.id = producer-1
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	internal.auto.downgrade.txn.commit = true
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 127000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

[pool-10-thread-1] INFO  o.a.k.c.u.AppInfoParser - [<init>,119] - Kafka version: 2.7.1
[pool-10-thread-1] INFO  o.a.k.c.u.AppInfoParser - [<init>,120] - Kafka commitId: 61dbce85d0d41457
[pool-10-thread-1] INFO  o.a.k.c.u.AppInfoParser - [<init>,121] - Kafka startTimeMs: 1659944463125
[kafka-producer-network-thread | producer-1] WARN  o.a.k.c.NetworkClient - [processDisconnection,782] - [Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
[kafka-producer-network-thread | producer-1] WARN  o.a.k.c.NetworkClient - [handleServerDisconnect,1079] - [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected

Application configuration (application.yml):

  kafka:
    bootstrap-servers: xxx.xxx.x.xxx:9092
    producer:
      retries: 0
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      acks: all
    consumer:
      group-id: test
      auto-commit-interval: 1S
      auto-offset-reset: earliest
      enable-auto-commit: true
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      session-timeout-ms: 30000
    listener:
      concurrency: 5

2. Analysis

# In the config file
	acks: all
# In the log
	acks = 1

Clearly the configuration file was not taking effect 😢. It turned out to be a very "advanced" (read: rookie) mistake 😄: in Spring Boot, the kafka settings must be nested under the spring key, like this:

spring:
  kafka:
    bootstrap-servers: xxx.xxx.x.xxx:9092

The wrong version (no wonder it never took effect):

mybatis:
  typeAliasesPackage: com.test.app
  kafka:
    bootstrap-servers: xxx.xxx.x.xxx:9092
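A quick way to catch this class of mistake is to check that the kafka key really is a child of spring. A minimal sketch, assuming the file lives at a hypothetical /tmp/application.yml (here written out with the correct nesting for demonstration):

```shell
# Write a sample application.yml with the correct nesting
cat > /tmp/application.yml <<'EOF'
spring:
  kafka:
    bootstrap-servers: xxx.xxx.x.xxx:9092
EOF

# A child of 'spring:' is indented by two spaces; grep -n prints the
# matching line with its number. No output means the key is misplaced
# (e.g. nested under mybatis, or at top level).
grep -n '^  kafka:' /tmp/application.yml
```

Running the same grep against the wrong version above would print nothing, because there kafka: is indented under mybatis:, not spring:.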

3. Summary

Many problems come down to carelessness. YAML formatting matters; get it wrong and you hit surprising failures, such as configuration silently not taking effect 😀, or parser errors like "found a tab character that violates indentation":

fail to parseJobConfig, err: OmniParseFileToJobConfig failed: parseByBootstrap failed: While parsing config: yaml: line 38: found a tab character that violates indentation
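Tabs in YAML can be hunted down before the parser complains. A hedged sketch using plain grep (the file name and contents are made up; no grep -P needed, so it stays portable):

```shell
# Create a YAML file that accidentally contains a tab before 'kafka:'
printf 'spring:\n\tkafka:\n    bootstrap-servers: localhost:9092\n' > /tmp/bad.yml

# Flag every line containing a literal tab character, with its line number
tab=$(printf '\t')
grep -n "$tab" /tmp/bad.yml
```

This prints the offending line (line 2 in the sample file), which is exactly the information the YAML error message reports.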
