
Documentation or troubleshooting hints for error messages

Open PAX523 opened this issue 6 years ago • 7 comments

I'm trying to send very large messages (1 MB), but sending fails and kafkacat aborts with an error:

% Delivery failed for message: Local: Message timed out

It's hard to locate the source of the problem. I'd be glad if you could provide a wiki or documentation page that lists all possible error messages and gives hints about their likely causes.

I've already modified message.max.bytes and request.timeout.ms on the kafkacat invocation, without success. The timeout setting wasn't effective; it still aborts after 5 minutes. In any case, I wonder why it takes so much time?
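For reference, my invocation looked roughly like this (broker, topic, and file names are placeholders, and the exact values varied):

kafkacat -P -b localhost:9092 -t mytopic -X message.max.bytes=1200000 -X request.timeout.ms=60000 big-message.bin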

PAX523 avatar May 03 '18 11:05 PAX523

Try -d msg to enable message debugging.
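For example, something like this (broker, topic, and file names are placeholders):

kafkacat -P -b localhost:9092 -t mytopic -d msg big-message.bin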

edenhill avatar May 03 '18 12:05 edenhill

Thanks for the quick response!

According to https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md, I enabled -X debug=broker,topic,msg. It logs this several times:

No more space in current MessageSet (0 message(s), 119 bytes)

The code location in librdkafka has the following condition: the current MessageSet is closed as soon as it reaches the maximum message count, or as soon as adding the next message's wire size would exceed message.max.bytes:

            if (unlikely(msgcnt == msetw->msetw_msgcntmax ||
                         len + rd_kafka_msg_wire_size(rkm, msetw->
                                                      msetw_MsgVersion) >
                         max_msg_size)) {
                    rd_rkb_dbg(rkb, MSG, "PRODUCE",
                               "No more space in current MessageSet "
                               "(%i message(s), %"PRIusz" bytes)",
                               msgcnt, len);
                    break;
            }

PAX523 avatar May 03 '18 12:05 PAX523

The value I set via -X message.max.bytes was too small. On top of the payload size, I need to allow for a certain amount of protocol overhead bytes (e.g. 300).
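As a sketch (broker, topic, and file names are placeholders; the overhead margin is illustrative), for a 1 MB payload that means something like:

kafkacat -P -b localhost:9092 -t mytopic -X message.max.bytes=1050000 big-message.bin

provided the broker's own message.max.bytes permits messages that large.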

Thanks for your hint.

PAX523 avatar May 03 '18 12:05 PAX523

What version of librdkafka? (kafkacat -V)

edenhill avatar May 03 '18 13:05 edenhill

Version 1.3.1 (JSON) (librdkafka 0.11.4 builtin.features=gzip,snappy,ssl,sasl,regex,lz4,sasl_plain,sasl_scram,plugins)

PAX523 avatar May 03 '18 14:05 PAX523

I ran into this one recently, also while producing, but with the opposite fix: reducing the maximum message size.

%7|1615433222.151|PRODUCE|rdkafka#producer-1| [thrd:kafka:9092/1]: kafka:9092/1: patient [0]: No more space in current MessageSet (1107 message(s), 999931 bytes)
%7|1615433222.151|PRODUCE|rdkafka#producer-1| [thrd:kafka:9092/1]: kafka:9092/1: patient [0]: Produce MessageSet with 1107 message(s) (999877 bytes, ApiVersion 7, MsgVersion 2, MsgId 0, BaseSeq -1, PID{Invalid}, uncompressed)
%7|1615433222.154|MSGSET|rdkafka#producer-1| [thrd:kafka:9092/1]: kafka:9092/1: patient [0]: MessageSet with 1128 message(s) (MsgId 0, BaseSeq -1) encountered error: Broker: Message size too large (actions Permanent,MsgNotPersisted)
% Delivery failed for message: Broker: Message size too large

In this case I needed to set the max message size in librdkafka/kafkacat to be slightly smaller than the configuration on the broker, e.g.:

kafka-configs --bootstrap-server localhost:9092 --describe --entity-type brokers --all  | grep message.max.bytes
...  message.max.bytes=100000  ...

So I used something like:

kafkacat -P -b localhost -t patient -J -X message.max.bytes=80000 -l patient.json

Is there some way we could configure kafkacat/librdkafka by default to prevent this?

masoncj avatar Mar 11 '21 03:03 masoncj

There's no practical way for a producer to know the broker's message.max.bytes setting, short of querying the topic config, but that would be a complex change to introduce to the producer.
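For reference, the topic-level limit (max.message.bytes) can be queried manually with the same tool as above, e.g. for the topic from the earlier example:

kafka-configs --bootstrap-server localhost:9092 --describe --entity-type topics --entity-name patient --all | grep max.message.bytes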

edenhill avatar Mar 11 '21 11:03 edenhill