Consider removing upper bound check in KafkaRequestDecoder#readAcks
I wonder if we should drop the version check here and rely on the unit tests to check we can extract acks successfully. The test already runs through all known api versions.
Originally posted by @robobario in https://github.com/kroxylicious/kroxylicious/pull/1356#discussion_r1696130197
The Produce RPC is special because, unlike other Kafka RPCs, it only receives a response in certain situations (the broker sends no response when acks == 0). Kroxylicious needs to know whether a response is expected in order to perform its request/response bookkeeping.
As Produce (and Fetch) RPCs tend to dominate Kafka conversations, effort was put into avoiding the deserialization of those RPCs. The code uses knowledge of the byte layout of the Produce request to read the acks value directly from the buffer. This optimization was intended to benefit use cases where the produce requests don't need to be filtered, so they can fly through the proxy without incurring the deserialization cost.
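For illustration only, here is a minimal sketch of that kind of positional read, assuming a Netty ByteBuf and covering only the non-flexible Produce request versions (whose body begins with the nullable transactional_id STRING followed by the acks INT16). It is not the actual readAcks implementation, which also has to handle the flexible/compact encodings.

```java
import io.netty.buffer.ByteBuf;

class AcksPeekSketch {
    /**
     * Peek the acks field of a ProduceRequest body without deserializing it.
     * Simplified sketch: handles only non-flexible versions (v3-v8), where the
     * body starts with transactional_id as a nullable STRING (INT16 length,
     * -1 == null) followed immediately by acks as an INT16.
     */
    static short peekAcks(ByteBuf body) {
        int offset = body.readerIndex();
        short txnIdLength = body.getShort(offset);          // nullable STRING length
        offset += Short.BYTES + Math.max(txnIdLength, 0);   // skip the string bytes (none when null)
        return body.getShort(offset);                       // acks: 0, 1 or -1
    }
}
```

An acks value of 0 is the case where the broker sends no response, which is exactly what the decoder's bookkeeping needs to know.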
The version checks in io.kroxylicious.proxy.internal.codec.KafkaRequestDecoder#readAcks were intended to force us to review any future change to the Produce RPC and confirm that the optimization was still safe.
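Conceptually the guard looks something like the following (the method name and version bounds here are illustrative, not copied from the Kroxylicious source): any Produce version outside the verified range fails the check and takes the full-deserialization path, which is what prompts the human review.

```java
class BoundedVersionGuardSketch {
    /**
     * Illustrative bounded guard: the positional acks read is only trusted for
     * Produce versions whose byte layout has been checked by hand.
     */
    static boolean acksOffsetKnown(short apiVersion) {
        // The upper bound is deliberate: a newly released Produce version fails
        // this check, so the decoder falls back to a full decode until someone
        // verifies the acks offset is still valid and raises the bound.
        return apiVersion >= 3 && apiVersion <= 9;
    }
}
```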
KIP-951 altered the picture: it changed the Produce (and Fetch) RPCs so that they carry broker address information, which the proxy needs to rewrite. As a result, the (internal) BrokerAddressFilter causes all Produce RPCs of version > 10 to be deserialized anyway, so the optimization is now only effective for clients using the older RPC versions.
io.kroxylicious.proxy.internal.codec.KafkaRequestDecoder#readAcks isn't aware that BrokerAddressFilter causes the newer Produce RPCs to be deserialized. The upshot: in production, the !decodeRequest branch is only ever taken for the older Produce RPC versions anyway.
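Put differently, the effective decode decision in production boils down to something like this (a hypothetical condensation, not the decoder's actual code; the version threshold follows the KIP-951 note above):

```java
class DecodeDecisionSketch {
    /**
     * Hypothetical condensation of the production behaviour: Produce versions
     * carrying the KIP-951 endpoint fields are decoded in full so that
     * BrokerAddressFilter can rewrite them, meaning the !decodeRequest fast
     * path is only ever reached for older Produce versions.
     */
    static boolean mustDecodeProduce(short apiVersion, boolean userFiltersWantProduce) {
        boolean carriesKip951Endpoints = apiVersion > 10; // threshold per the KIP-951 note above
        return carriesKip951Endpoints || userFiltersWantProduce;
    }
}
```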
If we decide to make the version checks open-ended, we should probably leave a comment pointing the reader at BrokerAddressFilter, so they get the whole picture.
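If we do go open-ended, the check plus the suggested comment might look something like this (again illustrative, reusing the hypothetical acksOffsetKnown name from the sketch above):

```java
class OpenEndedVersionGuardSketch {
    static boolean acksOffsetKnown(short apiVersion) {
        // Open-ended on purpose: the unit tests exercise every known Produce
        // version, so a layout change that moves acks should fail there.
        // NOTE: versions carrying the KIP-951 endpoint fields are decoded in
        // full anyway, because BrokerAddressFilter must rewrite the broker
        // addresses, so this fast path only matters for older Produce versions.
        return apiVersion >= 3;
    }
}
```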