
Produce failed: Local: Queue full

Open indsak opened this issue 1 year ago • 6 comments

I am trying to receive data from a socket at 100 MBps, each message nearly 7000-7500 bytes, and publish these messages to a Kafka topic on partition 0.

When I execute the program, after about 3 minutes I get the error "Failed to produce to topic: Local: Queue full". How can I overcome this? Below are the settings I wrote in the conf. What other settings should I include?

I may be receiving data at an even higher rate. Does librdkafka support this?

Below are the conf settings I have set in librdkafka:

rd_kafka_conf_set(conf, "bootstrap.servers", KAFKA_BROKER, errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "queue.buffering.max.messages", "100000000", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.ms", "40", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.kbytes", "1000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "message.max.bytes", "100000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "max.request.size", "100000000", errstr, sizeof(errstr));
rd_kafka_conf_set(conf, "compression.codec", "snappy", errstr, sizeof(errstr));

I am using librdkafka 1.9.0 for the producer.
Apache Kafka version:
Operating system: RHEL 7.9

server.properties is as below

broker.id=0
message.max.bytes=41943552
port=9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
offsets.retention.minutes=360
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.minutes=3
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

I have seen many posts on similar subjects and tried whatever I could, but I still get this error.

indsak commented Jun 28 '24 07:06

It may be that in 40 ms you're producing more than 1 MB of data; try increasing queue.buffering.max.kbytes to double the size of the messages produced in 40 ms.
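Worked out under that rule of thumb, assuming the stated 100 MBps input rate:

100 MB/s x 0.040 s = 4 MB buffered per linger interval
2 x 4 MB = 8 MB, i.e. queue.buffering.max.kbytes >= ~8192

(The original setting of 1,000,000 KB already exceeds this.)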

emasab commented Jun 28 '24 07:06

OK, thank you. I will try this and update.

indsak commented Jun 28 '24 08:06

Calculating with 7000-byte messages, about 4.2 MB of data accumulates for publishing in 40 ms.

I modified the conf as follows:

rd_kafka_conf_set(conf, "queue.buffering.max.messages", "100000", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.ms", "5", NULL, 0);
rd_kafka_conf_set(conf, "queue.buffering.max.kbytes", "2147483647", errstr, sizeof(errstr)); // maximum value
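For scale, assuming the 100 MBps and 7000-byte figures above:

100 MB/s / 7000 bytes ≈ 14,300 messages/s
100,000 messages / 14,300 messages/s ≈ 7 s to hit the queue.buffering.max.messages cap

So with the count limit reduced to 100,000, that setting can become the binding limit even with queue.buffering.max.kbytes at its maximum, unless delivery drains the queue at the same rate.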

But I still get the same error. Where am I going wrong? Any help?

indsak commented Jun 28 '24 10:06

Any help regarding my query, @edenhill?

indsak commented Jul 01 '24 06:07

Are you calling rd_kafka_poll() on the producer instance to serve delivery reports? Otherwise the number of enqueued messages will only increase.
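For reference, the produce loop in the librdkafka examples treats RD_KAFKA_RESP_ERR__QUEUE_FULL as backpressure: poll to serve delivery reports (which frees queue space) and retry. A sketch along those lines, with the topic name, buf, and len as placeholders:

rd_kafka_resp_err_t err;
retry:
err = rd_kafka_producev(rk,
                        RD_KAFKA_V_TOPIC("mytopic"),   /* placeholder topic */
                        RD_KAFKA_V_PARTITION(0),
                        RD_KAFKA_V_VALUE(buf, len),    /* placeholder payload */
                        RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
                        RD_KAFKA_V_END);
if (err == RD_KAFKA_RESP_ERR__QUEUE_FULL) {
        /* Queue is full: block up to 100 ms serving delivery report
         * callbacks, which frees queue space, then retry. */
        rd_kafka_poll(rk, 100);
        goto retry;
} else if (err) {
        fprintf(stderr, "Produce failed: %s\n", rd_kafka_err2str(err));
}
/* Serve delivery reports for earlier messages (non-blocking). */
rd_kafka_poll(rk, 0);

Note that rd_kafka_poll(rk, 0) never blocks, so if the broker cannot keep up the queue still fills; the blocking poll on the queue-full path is what applies backpressure to the socket reader.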

emasab commented Jul 30 '24 13:07

After each rd_kafka_producev() call to produce, I am calling rd_kafka_poll(rk, 0).

indsak commented Aug 23 '24 06:08