prometheus-kafka-adapter

remote write size larger than 104857600

Open jerryum opened this issue 2 years ago • 3 comments

[2023-02-08 19:56:35,993] WARN [SocketServer listenerType=ZK_BROKER, nodeId=0] Unexpected error from /10.138.0.12 (channelId=10.32.2.15:9092-10.138.0.12:37806-76); closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1347375956 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:105)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
        at kafka.network.Processor.poll(SocketServer.scala:1055)
        at kafka.network.Processor.run(SocketServer.scala:959)
        at java.base/java.lang.Thread.run(Thread.java:829)

This is what I received when I connected prometheus-kafka-adapter to Prometheus. I modified Kafka's max receive size to be larger than 1347375956, but I'm still getting the same error. Any advice is welcome!

jerryum avatar Feb 08 '23 19:02 jerryum
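As an aside, the rejected "size" in the log is worth a second look: 1347375956 is 0x504F5354 in hex, which is the ASCII bytes "POST". That pattern usually means an HTTP client (for example, Prometheus remote write) is talking directly to the Kafka broker port instead of to the adapter, so the broker misreads the HTTP request line as a message-size prefix. A quick sanity check, sketched in Python:

```python
# Decode the 4-byte big-endian "size" that Kafka read from the socket.
size = 1347375956
print(size.to_bytes(4, "big"))  # b'POST' -> looks like an HTTP request line, not a Kafka frame
```

If this is the case, the fix is to point the Prometheus remote_write URL at the adapter's HTTP port rather than the broker's 9092 port; no Kafka size limit will help.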

Hi @jerryum, that's a big message. I think it has nothing to do with prometheus-kafka-adapter, but with Kafka config itself. Could you try to increase the message.max.bytes in your Kafka brokers? Is that what you changed?

palmerabollo avatar Apr 16 '23 22:04 palmerabollo
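For reference, the 104857600 (100 MB) limit in the original error is the broker's socket.request.max.bytes default, while message.max.bytes caps an individual produce batch, so both broker properties can be in play. A hedged server.properties sketch (the values are illustrative, not recommendations):

```properties
# server.properties (broker) -- illustrative values only
message.max.bytes=10485760          # largest batch the broker will accept (default is ~1 MB)
socket.request.max.bytes=104857600  # largest single request read from a socket (default 100 MB)
replica.fetch.max.bytes=10485760    # keep >= message.max.bytes so followers can replicate large batches
```

Raising these alone won't help if the 1347375956-byte "message" is not really a Kafka request, so it's worth confirming what is connecting to port 9092 first.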

Yes, that's what I did. I couldn't find a solution, so I forked the repo and modified the adapter to write to two different Kafka topics, split by exporter, to reduce the message size. The pod metrics were too numerous, so I separated them: one topic for pod metrics and another topic for the rest.

jerryum avatar Apr 16 '23 22:04 jerryum

Hi @jerryum,

I faced a similar issue with Spark writes. I believe you may need to adjust the producer properties, specifically max.request.size. Please take a look at this resource: How to Send Large Messages in Apache Kafka.

You might need to change the producer configuration in the adapter code or tweak its settings. I'll update you once I find the necessary changes.

roshan989 avatar Jan 14 '24 14:01 roshan989
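One caveat on max.request.size: that property name belongs to the Java producer client. prometheus-kafka-adapter is written in Go on top of confluent-kafka-go/librdkafka, where the closest producer-side counterpart is the producer's own message.max.bytes property. A sketch of the setting (the 10 MB value is illustrative):

```properties
# librdkafka producer property (confluent-kafka-go) -- roughly the
# counterpart of the Java client's max.request.size; illustrative value
message.max.bytes=10485760
```

Note this must stay at or below the broker's message.max.bytes for the broker to accept the larger batches.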