Sarama Consumer should not check for MaxResponseSize in FetchResponse >= v3
Versions
Sarama Version: 35324cf48e33d8260e1c7c18854465a904ade249
Kafka Version: 1.1.1
Go Version: 1.12.0
Configuration
Not relevant
Logs
kafka: error while consuming foo/61: kafka: error decoding packet: message of length 104857857 too large or too small
Problem Description
Per Kafka KIP-74, Sarama should not enforce MaxResponseSize on the FetchResponse when sending FetchRequest v3 or higher: the Kafka broker may return up to one RecordBatch more data than the maximum size set in the FetchRequest.
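For context, the error in the log above comes from the size guard applied while decoding the response header. The snippet below is a self-contained illustration of that guard, not the actual sarama source; sarama's default MaxResponseSize is 100 MiB (104857600 bytes), which would put the 104857857-byte response in the log just 257 bytes over the limit.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// maxResponseSize mirrors sarama's package-level default (100 MiB). This file is
// only an illustration of the check described above.
const maxResponseSize = 100 * 1024 * 1024

// checkResponseLength applies the same kind of guard while reading a response
// header: the declared length must be sane and must not exceed maxResponseSize.
func checkResponseLength(header []byte) error {
	if len(header) < 4 {
		return fmt.Errorf("kafka: insufficient data to decode packet length")
	}
	length := int32(binary.BigEndian.Uint32(header[:4]))
	if length <= 4 || length > maxResponseSize {
		// This is the failure mode in the log above: the broker legitimately
		// returned slightly more than the requested fetch size, and the whole
		// response is rejected before it is ever read.
		return fmt.Errorf("kafka: error decoding packet: message of length %d too large or too small", length)
	}
	return nil
}

func main() {
	// 104857857 = 100*1024*1024 + 257: one small RecordBatch past the default cap.
	var header [4]byte
	binary.BigEndian.PutUint32(header[:], 104857857)
	fmt.Println(checkResponseLength(header[:]))
}
```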
Note that with this check in place, the consumer makes no progress (a degenerate case) under the following configuration:
# Sarama
MaxResponseSize = 1 byte
# Kafka
message.max.bytes = 1MB
But according to the Kafka protocol and the Apache Kafka Java client implementation, progress should still be made.
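For anyone hitting this before a proper fix lands, a minimal workaround sketch is to raise the package-level sarama.MaxResponseSize so that a response one RecordBatch over the requested fetch size still decodes. The broker address, Kafka version, and sizes below are placeholders for illustration, not values taken from this issue.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	// Workaround, not a fix: keep the client-side decode cap comfortably above
	// the broker's maximum message size, so a response that exceeds the
	// requested fetch size by one RecordBatch is still accepted.
	sarama.MaxResponseSize = 200 * 1024 * 1024

	config := sarama.NewConfig()
	config.Version = sarama.V1_1_0_0            // speak FetchRequest v3+ (KIP-74 semantics)
	config.Consumer.Fetch.Max = 1 * 1024 * 1024 // what we ask the broker for per response

	consumer, err := sarama.NewConsumer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatalln(err)
	}
	defer consumer.Close()
	// ... consume partitions as usual ...
}
```

This only papers over the coupling between the decode-time limit and the requested fetch size; it does not address the underlying check.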
Thanks for the report, @georgeteo
Thank you for taking the time to raise this issue. However, it has not had any activity on it in the past 90 days and will be closed in 30 days if no updates occur. Please check if the master branch has already resolved the issue since it was raised. If you believe the issue is still valid and you would like input from the maintainers then please comment to ask for it to be reviewed.
I believe this issue is still valid.
Bump! @benpbrown did this change in your fork solve the problem? If so, would you consider submitting your fix upstream?
Yes, the change in the fork fixed our problem.
I don't think I can submit the PR as-is because the fork unconditionally allows any message size.
I think what it actually needs to do is switch on the Kafka protocol version. That's why I haven't tried to upstream the change.
FetchResponse >= v3 is basically anything newer than Kafka 0.10.1, which in practice is everything these days, so we probably don't need to version-guard any change here.
However, it's not immediately obvious what the correct fix is. MaxResponseSize is still a useful safety net ("don't let my client crash because the server sent a length prefix claiming the message is enormous"), but it shouldn't really be coupled to what we send for request.MaxBytes in the FetchRequest, because those are two different concepts.
I'm also not sure that hard-failing with a PacketDecodingError for a response that merely claims to be larger than MaxResponseSize is a good idea either.
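To make that decoupling concrete, here is one possible shape for the check, purely as a sketch and not sarama code: keep a hard sanity ceiling against corrupt or hostile length prefixes, but stop hard-failing just because the response is bigger than the requested fetch size. The names and the ceiling value are assumptions for illustration.

```go
package main

import "fmt"

const (
	// requestMaxBytes is what the client advertises in FetchRequest.MaxBytes.
	requestMaxBytes = 1 * 1024 * 1024
	// hardSanityCeiling is an absolute "this cannot be a real response" limit,
	// deliberately independent of requestMaxBytes.
	hardSanityCeiling = 1 * 1024 * 1024 * 1024
)

// validateResponseLength keeps the safety net without coupling it to the fetch size.
func validateResponseLength(length int32) error {
	switch {
	case length <= 4:
		return fmt.Errorf("kafka: response length %d too small", length)
	case int64(length) > hardSanityCeiling:
		// Still protect the client from absurd allocations caused by a bogus
		// length prefix.
		return fmt.Errorf("kafka: response length %d exceeds sanity ceiling", length)
	default:
		// Anything in between is accepted, even if it exceeds requestMaxBytes:
		// per KIP-74 the broker may return at least one RecordBatch beyond the
		// requested maximum so that consumption can make progress.
		return nil
	}
}

func main() {
	fmt.Println(validateResponseLength(104857857)) // larger than requestMaxBytes: accepted
	fmt.Println(validateResponseLength(2))         // nonsense length prefix: rejected
}
```

Whether the right behavior above the ceiling is an error, a warning, or something softer is exactly the open question in the comment above.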