kafka
KAFKA-14063: Prevent malicious tiny payloads from causing OOMs with variably sized collections
When the parsing code receives a payload for a variable-length field whose length is specified in the payload as some arbitrarily large number (assume INT32_MAX, for example), it immediately tries to allocate an ArrayList to hold that many elements, before checking whether this is a reasonable collection size given the amount of remaining data.
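The problematic pattern looks roughly like the minimal Java sketch below. The class and method names are illustrative only and do not correspond to the actual generated Kafka parsing code; the point is the order of operations: the claimed element count sizes the ArrayList before any sanity check against the remaining bytes.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the vulnerable parsing shape, not the real Kafka classes.
public class VulnerableParser {
    // Reads a collection of fixed-size records from the buffer.
    static List<Long> readLongCollection(ByteBuffer buf) {
        int claimedSize = buf.getInt();
        // Problem: the backing array for 'claimedSize' elements is allocated
        // before checking whether the buffer could possibly contain that many
        // elements, so a tiny payload claiming INT32_MAX elements can trigger
        // a huge allocation and an OutOfMemoryError.
        List<Long> result = new ArrayList<>(claimedSize);
        for (int i = 0; i < claimedSize; i++) {
            result.add(buf.getLong());
        }
        return result;
    }
}
```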
The fix is to instead throw a runtime exception if the claimed length of a variably sized container exceeds the amount of remaining data. The worst a malicious client can then do is force the server to allocate roughly 8x the size of the actual delivered data: if it claims N elements for a container of objects (i.e. not a byte string), each claimed element bottoms out as an 8-byte reference in the ArrayList's backing array.
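A minimal sketch of the guarded version, under the same illustrative naming assumptions (the exact exception type and message used in the real fix may differ). Since every wire element occupies at least one byte, a claimed size larger than the remaining bytes cannot be legitimate:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the guarded parsing shape, not the real Kafka classes.
public class GuardedParser {
    static List<Long> readLongCollection(ByteBuffer buf) {
        int claimedSize = buf.getInt();
        // Guard: each element takes at least one byte on the wire, so the
        // claimed size can never legitimately exceed the remaining bytes.
        if (claimedSize > buf.remaining()) {
            throw new RuntimeException("Tried to allocate a collection of size "
                    + claimedSize + ", but there are only " + buf.remaining()
                    + " bytes remaining.");
        }
        List<Long> result = new ArrayList<>(claimedSize);
        for (int i = 0; i < claimedSize; i++) {
            result.add(buf.getLong());
        }
        return result;
    }
}
```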
This was identified by fuzzing the Kafka request parsing code.
Hello! This looks like a very interesting find. Could you write a test that covers this?
I would personally also be interested to learn what tools you used to fuzz the code. Have you tried fuzzing other parts of Kafka? If so, these seem like very good entry-level JIRA issues which could be documented for new joiners - have you created such JIRA issues?
The vulnerability has been addressed by the following commits.