Kafka - which is larger than the maximum request size you have configured with the max.request

Posted by 放羊的牧码


The error

ERROR org.springframework.kafka.support.LoggingProducerListener | Exception thrown when sending a message with key='null' and payload='123, 34, 97, 112, 105, 82, 101, 115, 112, 111, 110, 115, 101, 34, 58, 123, 34, 109, 115, 103, 34, 5...' to topic resp-persist-topic:
org.apache.kafka.common.errors.RecordTooLargeException: The message is 1784072 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler@52afb38]; nested exception is org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.RecordTooLargeException: The message is 1784072 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
  • In plain terms: the producer has a size limit on a single send, and this message exceeded it. The default limit is 1 MB (1048576 bytes); if you don't change the configuration, any message that serializes to more than 1 MB fails with this error.

Solutions

  • Raise the limits
# server.properties
# 5 MB: the largest message the broker will accept
message.max.bytes=5242880
# 6 MB: message bytes a replica tries to fetch per partition; must be >= message.max.bytes
replica.fetch.max.bytes=6291456


# producer.properties
# 5 MB: the maximum size of a request in bytes; should not exceed the broker's message.max.bytes
max.request.size=5242880


# consumer.properties
# 6 MB: message bytes fetched per topic partition per fetch request; must be >= message.max.bytes
# (fetch.message.max.bytes is the old consumer's name; the newer Java consumer calls it max.partition.fetch.bytes)
fetch.message.max.bytes=6291456

PS: remember to restart the Kafka server for the broker-side settings to take effect~
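Since the stack trace above comes from a Spring producer, the same limit can also be raised on the client side without editing producer.properties. Below is a minimal sketch using the plain kafka-clients API; the bootstrap address and payload are placeholders, and only the topic name is taken from the error above:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Raise the client-side limit from the 1 MB default (1048576) to 5 MB,
        // matching message.max.bytes on the broker.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 5242880);

        String largePayload = "x".repeat(2_000_000); // ~2 MB, over the old 1 MB limit

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("resp-persist-topic", largePayload));
        }
    }
}

In a Spring Cloud Stream app like the one in the stack trace, the same setting would normally go through the binder's configuration map (for recent binder versions something like spring.cloud.stream.kafka.binder.configuration.max.request.size=5242880; check your binder version's documentation, as the exact property path varies).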

  • Send in chunks

This only covers the idea; the specifics depend on your business. Essentially, you split the data yourself before sending: with a 10 MB payload and the configuration unchanged (1 MB default), each send can carry at most 1 MB, so in theory you need 10 sends. How to split is itself a craft! Some will say: if there's an array, cut it by index ranges, or group so many objects per batch, and so on; in practice none of that works well, because parsing complexity on the receiving end shoots straight up!

Here I recommend a simple, crude approach: just convert the payload to a String and slice it up. Think about it: the consumer always has to wait for the complete data before doing any business processing, so the consumer-side maintenance cost is the same for every scheme; just compare producer-side maintenance cost and pick whichever is lower. Personally I think stringify-then-slice is the fastest way~ Of course other situations can arise, such as a crash halfway through leaving the data incomplete; we won't discuss those here, since they belong to a different business scenario with its own technical solution!
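To make the idea concrete, here is a minimal producer-side sketch. The header names (message-id, chunk-index, chunk-total) and the char-based chunk size are made up for the example, not from the original post; the consumer would buffer pieces by message-id until it has chunk-total of them, then concatenate in chunk-index order before deserializing:

import java.nio.charset.StandardCharsets;
import java.util.UUID;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChunkedSender {

    // Counts chars, not bytes: leave headroom below the 1 MB default for
    // multi-byte text, or slice the UTF-8 bytes instead if payloads are not mostly ASCII.
    private static final int CHUNK_SIZE = 500_000;

    public static void sendInChunks(KafkaProducer<String, String> producer,
                                    String topic, String payload) {
        String messageId = UUID.randomUUID().toString(); // ties the pieces together
        int total = (payload.length() + CHUNK_SIZE - 1) / CHUNK_SIZE;

        for (int i = 0; i < total; i++) {
            String piece = payload.substring(i * CHUNK_SIZE,
                    Math.min((i + 1) * CHUNK_SIZE, payload.length()));
            // Same key for every piece => same partition => pieces stay in order.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>(topic, messageId, piece);
            record.headers().add("message-id", messageId.getBytes(StandardCharsets.UTF_8));
            record.headers().add("chunk-index", Integer.toString(i).getBytes(StandardCharsets.UTF_8));
            record.headers().add("chunk-total", Integer.toString(total).getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}

Keying every piece with the same id is what keeps them on one partition and in order; as the paragraph above says, a producer dying halfway and leaving a partial message on the topic is a separate reliability problem.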
