
Fetching data with default MaxFetchSize produces an error

Open · andrewslotin opened this issue 6 years ago · 1 comment

https://github.com/optiopay/kafka/commit/63a3d15f added a sanity check that limits response size to 640 KB. This effectively disallows larger values for (kafka.ConsumerConf).MaxFetchSize, which defaults to ~2 MB.

As a result, fetching more than 640 KB of data from a topic results in an unreasonable message/block size error, which is misleading, because this is exactly the amount of data that was explicitly requested.

Putting a hard limit on the message/block size that does not take the value of MaxFetchSize into account seems too strict. A streaming parser might be a better solution here.

andrewslotin avatar Apr 03 '18 15:04 andrewslotin
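The conflict described above can be sketched in a few lines of Go. This is an illustrative mock, not the library's actual code: `maxAllowedSize`, `defaultMaxFetch`, and `checkBlockSize` are hypothetical names, with the 655350-byte cap taken from the error message quoted below and ~2 MB standing in for the default MaxFetchSize.

```go
package main

import "fmt"

// Illustrative values only: the ~640 KB cap matches the error reported in
// this thread, and ~2 MB approximates the default MaxFetchSize.
const (
	maxAllowedSize  = 655350  // hypothetical hard sanity-check cap (~640 KB)
	defaultMaxFetch = 2000000 // hypothetical default MaxFetchSize (~2 MB)
)

// checkBlockSize mimics the kind of sanity check described in the issue:
// it rejects any block larger than the hard cap, regardless of how much
// data the consumer explicitly requested via MaxFetchSize.
func checkBlockSize(size int) error {
	if size > maxAllowedSize {
		return fmt.Errorf("unreasonable message/block size %d (max:%d)", size, maxAllowedSize)
	}
	return nil
}

func main() {
	// A response sized to the default MaxFetchSize already trips the check.
	if err := checkBlockSize(defaultMaxFetch); err != nil {
		fmt.Println(err)
	}
}
```

The sketch shows why the error is misleading: the cap fires on exactly the amount of data the consumer configuration asked for.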

I am seeing the same error (unreasonable message/block size 655616 (max:655350)) for a similar reason: in my case, the individual message sizes are all small (<= 100 bytes), but they are batched together. Since, if I understand correctly, a batch is just a wire-protocol message that embeds other messages, it is not surprising that the batch itself exceeds this sanity-check size.

extemporalgenome avatar Apr 04 '18 15:04 extemporalgenome
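The second comment's point can be made concrete with some back-of-the-envelope arithmetic: small messages plus per-message framing overhead, multiplied by a large enough batch, easily exceed the cap. The overhead and batch count below are assumptions chosen for illustration, not values from the library or the reporter's workload.

```go
package main

import "fmt"

func main() {
	const (
		maxAllowedSize = 655350 // the cap from the reported error
		payloadSize    = 100    // each individual message is <= 100 bytes
		perMsgOverhead = 26     // assumed per-message framing overhead (offset, length, CRC, etc.)
		batchCount     = 5300   // hypothetical number of messages in one batch
	)
	// Total wire size of the batch: every embedded message carries its
	// payload plus framing overhead, so small messages still add up.
	batchSize := batchCount * (payloadSize + perMsgOverhead)
	fmt.Printf("batch of %d messages: %d bytes (cap %d, exceeded: %v)\n",
		batchCount, batchSize, maxAllowedSize, batchSize > maxAllowedSize)
}
```

Even though no single message comes anywhere near 640 KB, the batch as a whole trips the sanity check, which matches the 655616-byte failure reported above.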