Failed to send data to Kafka: Expiring 89 record(s) ...30005 ms has passed since last append

Posted by 二十六画生的博客


2021-06-08 02:05:37 java.lang.Exception: Failed to send data to Kafka: Expiring 89 record(s) for XXXXXXXXXXXXXX: 30005 ms has passed since last append
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.checkErroneous(FlinkKafkaProducerBase.java:400)
    ............

After some Googling, I found a good explanation:

On the producer side, messages are first held in an internal buffer and grouped into batches. If a batch has waited longer than the default 30 s limit without being sent, this exception is thrown, and the records in that batch are removed from the send queue (i.e. lost)!
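The expiry behavior can be sketched with a toy model (all names here are hypothetical; the real producer's record accumulator is more involved): each buffered record carries the time it was appended, and any record that has waited longer than the timeout is dropped instead of sent.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BatchExpirySketch {
    // Toy stand-in for a buffered record: payload plus the time it was appended.
    static class Record {
        final String value;
        final long appendedAtMs;
        Record(String value, long appendedAtMs) {
            this.value = value;
            this.appendedAtMs = appendedAtMs;
        }
    }

    static final long TIMEOUT_MS = 30_000; // mirrors the 30 s default timeout

    // Drop (do not send) every record that has waited longer than the timeout;
    // return how many were expired, like the count in the log line.
    static int expire(Deque<Record> buffer, long nowMs) {
        int expired = 0;
        while (!buffer.isEmpty()
                && nowMs - buffer.peekFirst().appendedAtMs > TIMEOUT_MS) {
            buffer.pollFirst();
            expired++;
        }
        return expired;
    }

    public static void main(String[] args) {
        Deque<Record> buffer = new ArrayDeque<>();
        buffer.addLast(new Record("a", 0));      // appended at t = 0
        buffer.addLast(new Record("b", 25_000)); // appended at t = 25 s
        // 30 005 ms later (like "30005 ms has passed" in the log),
        // "a" has exceeded the timeout and is dropped; "b" survives.
        int expired = expire(buffer, 30_005);
        System.out.println("Expiring " + expired + " record(s)");
        System.out.println("remaining=" + buffer.size());
    }
}
```

The point of the sketch: expiry is driven purely by how long a record sits in the buffer, so queueing faster than the producer can drain is what triggers it.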

https://stackoverflow.com/questions/46750420/kafka-producer-error-expiring-10-records-for-topicxxxxxx-6686-ms-has-passed

This exception is occurring because you are queueing records at a much faster rate than they can be sent.

When you call the send method, the ProducerRecord will be stored in an internal buffer for sending to the broker. The method returns immediately once the ProducerRecord has been buffered, regardless of whether it has been sent.

Records are grouped into batches for sending to the broker, to reduce the transport overhead per message and increase throughput.

Once a record is added into a batch, there is a time limit for sending that batch to ensure that it has been sent within a specified duration. This is controlled by the Producer configuration parameter, request.timeout.ms, which defaults to 30 seconds.

If the batch has been queued longer than the timeout limit, the exception will be thrown. Records in that batch will be removed from the send queue.

Producer configs block.on.buffer.full, metadata.fetch.timeout.ms and timeout.ms have been removed. They were initially deprecated in Kafka 0.9.0.0.

Therefore, try increasing request.timeout.ms.
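A minimal sketch of that fix for a plain Java Kafka producer (the property keys are the standard producer config names; the broker address and the 60 s value are placeholders, not from the original post):

```java
import java.util.Properties;

public class ProducerTimeoutConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Raise the batch deadline from the 30 s default to 60 s,
        // so queued batches get more time before being expired.
        props.put("request.timeout.ms", "60000");
        // Optionally send batches sooner so the buffer drains faster.
        props.put("linger.ms", "5");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("request.timeout.ms"));
    }
}
```

For the Flink connector in the stack trace above, the same Properties object can be passed to the FlinkKafkaProducer constructor's producer-config argument, so the setting reaches the underlying KafkaProducer.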


 
