Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers


Exception 1:

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for user-video-0
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:65)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
    at kafka.examples.Producer.run(Producer.java:43)
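This TimeoutException means the producer could not fetch topic metadata (or deliver the batch) before its timeout elapsed. Besides the network fixes discussed below, it can help to make the relevant timeouts explicit so failures surface (or recover) predictably. The following is only a sketch, not part of the original post: the broker addresses are placeholders, and the property names (max.block.ms, request.timeout.ms) are from the 0.9+ Java producer; 0.8.x clients used metadata.fetch.timeout.ms for the metadata wait instead.

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static Properties tunedConfig() {
        Properties props = new Properties();
        // Hypothetical broker addresses; the client must be able to resolve
        // and reach these host names, not just their IPs.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        // Upper bound on how long send() may block waiting for metadata
        // (default 60000 ms); lowering it surfaces connectivity problems faster.
        props.put("max.block.ms", "10000");
        // Per-request timeout on the broker side; batches sitting longer than
        // this in the accumulator are expired with the error shown above.
        props.put("request.timeout.ms", "30000");
        // Retry transient network failures instead of failing immediately.
        props.put("retries", "3");
        return props;
    }

    public static void main(String[] args) {
        Properties p = tunedConfig();
        System.out.println("max.block.ms=" + p.getProperty("max.block.ms"));
    }
}
```

These Properties would then be passed to `new KafkaProducer<>(props)` together with the usual key/value serializers.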

Exception 2:

Failed to send producer request with correlation id 8 to broker 2 with data for partitions [user-video,0]
java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:100) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.SyncProducer.send(SyncProducer.scala:101) ~[kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255) [kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106) [kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100) [kafka_2.10-0.8.2.0.jar:?]
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) [scala-library-2.10.4.jar:?]
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.4.jar:?]
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98) [scala-library-2.10.4.jar:?]
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226) [scala-library-2.10.4.jar:?]
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39) [scala-library-2.10.4.jar:?]
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98) [scala-library-2.10.4.jar:?]
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) [scala-library-2.10.4.jar:?]
    at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100) [kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72) [kafka_2.10-0.8.2.0.jar:?]
    at kafka.producer.Producer.send(Producer.scala:77) [kafka_2.10-0.8.2.0.jar:?]
    at kafka.javaapi.producer.Producer.send(Producer.scala:33) [kafka_2.10-0.8.2.0.jar:?]
    at kafka.example.kafkaProducer.run(kafkaProducer.java:24) [classes/:?]

Back off for 100 ms before retrying send. Remaining retries = 1

Both of these exceptions are caused by network problems. If the exception appears when testing locally, move the code onto the cluster and test there; if it only fails from outside the cluster, check the firewall; and if that still does not help, add the broker IP-to-hostname mappings to the local hosts file. (The producer connects using the host names the brokers advertise in their metadata, so those names must resolve on the client machine.)
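The hosts-file fix described above amounts to entries like the following on the producer machine. The IPs and host names here are placeholders; use the names the brokers actually advertise (e.g. advertised.host.name in server.properties for 0.8.x):

```
# /etc/hosts on the client machine (example values only)
192.168.1.101  kafka-broker-1
192.168.1.102  kafka-broker-2
```

After editing, verify that the names resolve from the client and that the broker port (typically 9092) is reachable through the firewall.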
