confluent-kafka-python
Producing large messages fails even though all configuration is set
Description
Producing a large message fails. We run Kafka via Docker Compose with pre-created topics. Output of the topic initialization:
Creating kafka topics
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic body_events.
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic body_tracks.
Successfully created the following topics:
_confluent-command
body_events
body_tracks
Dynamic configs for topic body_events are:
max.message.bytes=20971520 sensitive=false synonyms={DYNAMIC_TOPIC_CONFIG:max.message.bytes=20971520, DEFAULT_CONFIG:message.max.bytes=1048588}
segment.index.bytes=41943040 sensitive=false synonyms={DYNAMIC_TOPIC_CONFIG:segment.index.bytes=41943040, DEFAULT_CONFIG:log.index.size.max.bytes=10485760}
Dynamic configs for topic body_tracks are:
max.message.bytes=20971520 sensitive=false synonyms={DYNAMIC_TOPIC_CONFIG:max.message.bytes=20971520, DEFAULT_CONFIG:message.max.bytes=1048588}
segment.index.bytes=41943040 sensitive=false synonyms={DYNAMIC_TOPIC_CONFIG:segment.index.bytes=41943040, DEFAULT_CONFIG:log.index.size.max.bytes=10485760}
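As a sanity check (a sketch added here for illustration; localhost:9092 is an assumption), the effective topic configuration can be confirmed from Python with the AdminClient:

from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({'bootstrap.servers': 'localhost:9092'})
resource = ConfigResource(ConfigResource.Type.TOPIC, 'body_events')

# describe_configs() returns {ConfigResource: future}; each future resolves to
# a dict of config name -> ConfigEntry
for res, future in admin.describe_configs([resource]).items():
    configs = future.result()
    print(res, '->', configs['max.message.bytes'].value)  # expect 20971520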
We create the Kafka producer as follows:
self._kafka_cfg = {
    'bootstrap.servers': f'{host}:9092',
    'socket.send.buffer.bytes': 20971520,  # socket send buffer size
    'message.max.bytes': 20971520,         # client-side max message/batch size
    'fetch.message.max.bytes': 20971520    # consumer property; no effect on a producer
}
self._producer = Producer(self._kafka_cfg)
But when we try to send large messages we get the following error:
KafkaError{code=MSG_SIZE_TOO_LARGE,val=10,str="Broker: Message size too large"}...
while the message size is only about 10 MB (for example, 1135704 B).
How to reproduce
We can provide the docker-compose file and the producer code... If anything is missing from the required information, please say so.
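In the meantime, a minimal sketch of the producer side, assuming a broker on localhost:9092 and the pre-created body_events topic (the payload size is illustrative):

from confluent_kafka import Producer

producer = Producer({
    'bootstrap.servers': 'localhost:9092',
    'message.max.bytes': 20971520,
})

def report(err, msg):
    if err is not None:
        print('Delivery failed:', err)  # MSG_SIZE_TOO_LARGE shows up here
    else:
        print('Delivered to', msg.topic(), 'partition', msg.partition())

# ~10 MB payload, comparable to the sizes described above
producer.produce('body_events', value=b'x' * (10 * 1024 * 1024), on_delivery=report)
producer.flush()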
Checklist
Please provide the following information:
- [x] confluent-kafka-python and librdkafka version: ('1.9.0', 17367040) and ('1.9.0', 17367295)
- [x] Apache Kafka broker version: 7.1.1
- [ ] Client configuration: {...}
- [ ] Operating system: Ubuntu 20
- [ ] Provide client logs (with 'debug': '..' as necessary)
- [ ] Provide broker log excerpts
- [ ] Critical issue
Solution: I still think this is a bug (though it appears to be at the Java Kafka level), as the fix is to define the following environment variables in docker-compose:
KAFKA_SOCKET_REQUEST_MAX_BYTES: 20971520
KAFKA_MESSAGE_MAX_BYTES: 20971520
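For completeness, a hedged sketch (broker id '1' is an assumption, as is the localhost listener) to confirm the broker actually picked these values up after a restart:

from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({'bootstrap.servers': 'localhost:9092'})
broker = ConfigResource(ConfigResource.Type.BROKER, '1')  # hypothetical broker id

for res, future in admin.describe_configs([broker]).items():
    cfg = future.result()
    print(cfg['message.max.bytes'].value)         # expect 20971520
    print(cfg['socket.request.max.bytes'].value)  # expect 20971520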
It's not a bug. The message.max.bytes config value on the client must not exceed the one on the broker. Note that in Kafka version 0.11 and later, this limit applies to the produced batch size, not to individual messages. Refer to https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
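To illustrate that advice (a sketch added for this write-up, not code from the thread): an oversized payload is rejected in two different places depending on which limit it violates. If it exceeds the client's own message.max.bytes, produce() raises synchronously; if the broker rejects the batch, the error arrives via the delivery callback:

from confluent_kafka import Producer, KafkaException

producer = Producer({
    'bootstrap.servers': 'localhost:9092',
    'message.max.bytes': 20971520,  # client-side cap; keep it <= the broker/topic limit
})

def report(err, msg):
    # Broker-side rejections (e.g. a batch over the topic's max.message.bytes)
    # surface here, asynchronously.
    if err is not None:
        print('delivery error:', err)
    else:
        print('delivered to', msg.topic())

try:
    # 30 MB exceeds the client's message.max.bytes above, so librdkafka
    # rejects it locally, before anything is sent to the broker.
    producer.produce('body_events', b'x' * (30 * 1024 * 1024), on_delivery=report)
except KafkaException as e:
    print('local error:', e)

producer.flush()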