Emanuele Sabellico
@mensfeld we're trying, even if it's difficult. We know GitHub did it for their own repos, so maybe it's possible. In the meantime, please re-star the clients to support them.
@mensfeld Thank you!
> update: I can now trigger segfaults. Not sure yet exactly why but at least I can crash it on my machine. @mensfeld that's great, is it possible to gather...
Thanks a lot @mensfeld! I could reproduce it and found the cause: the same variable `i` is reused in a nested loop here: https://github.com/confluentinc/librdkafka/blob/6eaf89fb124c421b66b43b195879d458a3a31f86/src/rdkafka_sticky_assignor.c#L821
Not in all cases; I could reproduce it in Python even with `"client.id": str(time.time())` or with `"client.id": str(random.randint(1,1000000))`. It happens when the number of potential partitions in the inner loop is less...
@Quuxplusone Thanks for the report. This depends on those interceptors being freed [here](https://github.com/confluentinc/librdkafka/blob/c75eae84846b1023422b75798c41d4b6b1f8b0b7/src/rdkafka.c#L2605), which leads to a double free. Removing that line will make the destroy interceptor be called when...
It may be that in 40 ms you're producing more than 1 MB of data; try increasing `queue.buffering.max.kbytes` to at least double the size of the messages produced in 40 ms.
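As a sketch, assuming roughly 1 MB produced per 40 ms, the producer configuration could look like this (the values are illustrative, tune them for your own throughput):

```ini
# Local producer queue size in kilobytes; here ~2x the 1 MB
# produced in a 40 ms window, as headroom.
queue.buffering.max.kbytes=2048
# Related cap on the number of locally queued messages.
queue.buffering.max.messages=100000
```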
Are you calling `rd_kafka_poll` on the producer instance to get delivery reports? Otherwise the number of enqueued messages will only increase.
Thanks Pranav! Please rebase this branch, then it will be ready to merge.
Closing in favour of #5015