librdkafka
Fetch 951 prework v13+
Tests didn't run because the branch name doesn't start with "dev_" or "feature/". Running the tests locally produced buffer underflows and disconnections from the broker, caused by request-parsing failures:
RDUT: INFO: rdkafka_msg.c:2383: unittest_msgq_insert_each_sort: Done: took 8853us, 0.1020us/msg
%1|1705058293.261|PROTOUFLOW|MOCK#producer-25| [thrd:mock]: mock:0/internal: Protocol read buffer underflow for Fetch v11 at 50/133 (rd_kafka_mock_handle_Fetch:221): expected 27489 bytes > 83 remaining bytes (incorrect broker.version.fallback?)
%1|1705058293.351|PROTOUFLOW|MOCK#producer-25| [thrd:mock]: mock:0/internal: Protocol read buffer underflow for Fetch v11 at 50/161 (rd_kafka_mock_handle_Fetch:221): expected 27489 bytes > 111 remaining bytes (incorrect broker.version.fallback?)
%6|1705059537.775|FAIL|0030_offset_commit#consumer-78| [thrd:localhost:58043/3]: localhost:58043/3: Disconnected: verify that security.protocol is correctly configured, broker might require SASL authentication (after 1ms in state UP)
%6|1705059537.775|FAIL|0030_offset_commit#consumer-78| [thrd:localhost:56931/4]: localhost:56931/4: Disconnected: verify that security.protocol is correctly configured, broker might require SASL authentication (after 1ms in state UP)
[0030_offset_commit / 4.158s] TEST FAILURE
### Test "0030_offset_commit (do_empty_commit:414)" failed at test.c:678:test_error_cb() at Fri Jan 12 12:38:57 2024: ###
0030_offset_commit#consumer-78 rdkafka error: Local: Broker transport failure: localhost:58043/3: Disconnected: verify that security.protocol is correctly configured, broker might require SASL authentication (after 1ms in state UP)
%4|1705059537.775|FAIL|0030_offset_commit#consumer-78| [thrd:localhost:40271/2]: localhost:40271/2: Disconnected: verify that security.protocol is correctly configured, broker might require SASL authentication (after 1ms in state UP)
Caused by: java.lang.RuntimeException: Tried to allocate a collection of size 149446248, but there are only 48 bytes remaining.
at org.apache.kafka.common.message.FetchRequestData.read(FetchRequestData.java:270)
at org.apache.kafka.common.message.FetchRequestData.<init>(FetchRequestData.java:202)
at org.apache.kafka.common.requests.FetchRequest.parse(FetchRequest.java:429)
at org.apache.kafka.common.requests.AbstractRequest.doParseRequest(AbstractRequest.java:173)
at org.apache.kafka.common.requests.AbstractRequest.parseRequest(AbstractRequest.java:165)
at org.apache.kafka.common.requests.RequestContext.parseRequest(RequestContext.java:95)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:101)
at kafka.network.Processor.$anonfun$processCompletedReceives$1(SocketServer.scala:1096)
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:833)
at kafka.network.Processor.processCompletedReceives(SocketServer.scala:1074)
at kafka.network.Processor.run(SocketServer.scala:960)
at java.base/java.lang.Thread.run(Thread.java:1583)
Please address these comments and rename the branch so the pipeline runs; it should then pass. These comments in particular contain the fixes:
https://github.com/confluentinc/librdkafka/pull/4552#discussion_r1450000969 https://github.com/confluentinc/librdkafka/pull/4552#discussion_r1450026448 https://github.com/confluentinc/librdkafka/pull/4552#discussion_r1450029211 https://github.com/confluentinc/librdkafka/pull/4552#discussion_r1450032959