
List topics call doesn't cache topic metadata on producer instance

Open jzvelc opened this issue 1 year ago • 5 comments

Description

We have hit the same issue as described in https://github.com/confluentinc/confluent-kafka-dotnet/issues/701. The first publish to a topic takes ~1s. The workaround of prefetching metadata works in the Golang client but doesn't work in Python.

producer_instance.list_topics(topic=None) doesn't cache topic metadata. Interestingly, explicitly specifying a topic in the list_topics call, e.g. producer_instance.list_topics(topic='topic-to-produce-to'), works as expected and properly caches the metadata, which speeds up the first produce call. We would like to make a single list_topics call to prefetch metadata for all topics and avoid a large number of requests to the broker.

How to reproduce

# Example 1: Takes ~1s to produce (waiting for delivery report)
producer_instance.list_topics(topic=None)  # this call does not help
producer_instance.produce(topic='topic-to-produce-to', ...)

# Example 2: Produces instantly
producer_instance.list_topics(topic='topic-to-produce-to')
producer_instance.produce(topic='topic-to-produce-to', ...)

Delay occurs at: DEBUG: WAKEUP [rdkafka#producer-1] [thrd:app]: 127.0.0.1:29093/1: Wake-up: flushing
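For completeness, a self-contained sketch of the setup used in the examples above (the bootstrap address, topic name and payload are placeholders; the producer config is otherwise default):

import time
from confluent_kafka import Producer

producer_instance = Producer({'bootstrap.servers': '127.0.0.1:29093', 'debug': 'all'})

def on_delivery(err, msg):
    # Delivery report callback; err is None on success
    print('delivered' if err is None else f'failed: {err}')

# Prefetch metadata. With topic=None the result is not cached (Example 1);
# with an explicit topic it is (Example 2).
producer_instance.list_topics(topic=None)

start = time.monotonic()
producer_instance.produce('topic-to-produce-to', value=b'payload', on_delivery=on_delivery)
producer_instance.flush()
print(f'first produce took {time.monotonic() - start:.2f}s')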

Checklist

Please provide the following information:

  • [x] confluent-kafka-python and librdkafka version (confluent_kafka.version() and confluent_kafka.libversion()): Tested with 2.2.0 and 2.3.0 e.g.: ('2.3.0', 33751040) ('2.3.0', 33751295)
  • [x] Apache Kafka broker version: 3.4-IV0
  • [x] Client configuration: Any e.g. {'debug': 'all', 'bootstrap.servers': '127.0.0.1:29093'}
  • [x] Operating system: OSX/Linux
  • [x] Provide logs (with "debug" : "..." as necessary in configuration):
# Example 1
2023-11-23 16:27:55,793 DEBUG: WAKEUP [rdkafka#producer-1] [thrd:app]: 127.0.0.1:29093/1: Wake-up: flushing
2023-11-23 16:27:55,793 DEBUG: WAKEUP [rdkafka#producer-1] [thrd:app]: Wake-up sent to 1 broker thread in state >= UP: flushing
2023-11-23 16:27:56,624 DEBUG: NOINFO [rdkafka#producer-1] [thrd:main]: Topic topic-to-produce-to metadata information unknown
2023-11-23 16:27:56,624 DEBUG: NOINFO [rdkafka#producer-1] [thrd:main]: Topic topic-to-produce-to partition count is zero: should refresh metadata
2023-11-23 16:27:56,624 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: Requesting metadata for 1/1 topics: refresh unavailable topics
2023-11-23 16:27:56,624 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: 127.0.0.1:29093/1: Request metadata for 1 topic(s): refresh unavailable topics
2023-11-23 16:27:56,626 DEBUG: SEND [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Sent MetadataRequest (v12, 63 bytes @ 0, CorrId 5)
2023-11-23 16:27:56,646 DEBUG: RECV [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Received MetadataResponse (v12, 256 bytes, CorrId 5, rtt 21.72ms)
2023-11-23 16:27:56,647 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: 127.0.0.1:29093/1: ===== Received metadata (for 1 requested topics): refresh unavailable topics =====
2023-11-23 16:27:56,647 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: 127.0.0.1:29093/1: ClusterId: OiL1NCroHTyT9nB1uBOsmg, ControllerId: 1
2023-11-23 16:27:56,647 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: 127.0.0.1:29093/1: 1 brokers, 1 topics
2023-11-23 16:27:56,647 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: 127.0.0.1:29093/1:   Broker #0/1: 127.0.0.1:29093 NodeId 1
2023-11-23 16:27:56,647 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: 127.0.0.1:29093/1:   Topic topic-to-produce-to with 6 partitions
2023-11-23 16:27:56,647 DEBUG: STATE [rdkafka#producer-1] [thrd:main]: Topic topic-to-produce-to changed state unknown -> exists
2023-11-23 16:27:56,647 DEBUG: PARTCNT [rdkafka#producer-1] [thrd:main]: Topic topic-to-produce-to partition count changed from 0 to 6
2023-11-23 16:27:56,647 DEBUG: TOPPARNEW [rdkafka#producer-1] [thrd:main]: NEW topic-to-produce-to [0] 0x108809800 refcnt 0x108809890 (at rd_kafka_topic_partition_cnt_update:912)
2023-11-23 16:27:56,647 DEBUG: TOPPARNEW [rdkafka#producer-1] [thrd:main]: NEW topic-to-produce-to [1] 0x10880a200 refcnt 0x10880a290 (at rd_kafka_topic_partition_cnt_update:912)
2023-11-23 16:27:56,648 DEBUG: TOPPARNEW [rdkafka#producer-1] [thrd:main]: NEW topic-to-produce-to [2] 0x10880a800 refcnt 0x10880a890 (at rd_kafka_topic_partition_cnt_update:912)
2023-11-23 16:27:56,648 DEBUG: TOPPARNEW [rdkafka#producer-1] [thrd:main]: NEW topic-to-produce-to [3] 0x10880ae00 refcnt 0x10880ae90 (at rd_kafka_topic_partition_cnt_update:912)
2023-11-23 16:27:56,648 DEBUG: TOPPARNEW [rdkafka#producer-1] [thrd:main]: NEW topic-to-produce-to [4] 0x10880b400 refcnt 0x10880b490 (at rd_kafka_topic_partition_cnt_update:912)
2023-11-23 16:27:56,648 DEBUG: TOPPARNEW [rdkafka#producer-1] [thrd:main]: NEW topic-to-produce-to [5] 0x10880ba00 refcnt 0x10880ba90 (at rd_kafka_topic_partition_cnt_update:912)
2023-11-23 16:27:56,648 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]:   Topic topic-to-produce-to partition 0 Leader 1 Epoch 0
2023-11-23 16:27:56,648 DEBUG: BROKER [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [0]: leader -1 epoch -1 -> leader 1 epoch 0
2023-11-23 16:27:56,648 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [0]: delegate to broker 127.0.0.1:29093/1 (rktp 0x108809800, term 0, ref 3)
2023-11-23 16:27:56,648 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [0]: delegating to broker 127.0.0.1:29093/1 for partition with 0 messages (0 bytes) queued
2023-11-23 16:27:56,648 DEBUG: BRKMIGR [rdkafka#producer-1] [thrd:main]: Migrating topic topic-to-produce-to [0] 0x108809800 from (none) to 127.0.0.1:29093/1 (sending PARTITION_JOIN to 127.0.0.1:29093/1)
2023-11-23 16:27:56,648 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]:   Topic topic-to-produce-to partition 1 Leader 1 Epoch 0
2023-11-23 16:27:56,648 DEBUG: BROKER [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [1]: leader -1 epoch -1 -> leader 1 epoch 0
2023-11-23 16:27:56,649 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [1]: delegate to broker 127.0.0.1:29093/1 (rktp 0x10880a200, term 0, ref 3)
2023-11-23 16:27:56,649 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [1]: delegating to broker 127.0.0.1:29093/1 for partition with 0 messages (0 bytes) queued
2023-11-23 16:27:56,649 DEBUG: BRKMIGR [rdkafka#producer-1] [thrd:main]: Migrating topic topic-to-produce-to [1] 0x10880a200 from (none) to 127.0.0.1:29093/1 (sending PARTITION_JOIN to 127.0.0.1:29093/1)
2023-11-23 16:27:56,649 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]:   Topic topic-to-produce-to partition 2 Leader 1 Epoch 0
2023-11-23 16:27:56,649 DEBUG: BROKER [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [2]: leader -1 epoch -1 -> leader 1 epoch 0
2023-11-23 16:27:56,649 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [2]: delegate to broker 127.0.0.1:29093/1 (rktp 0x10880a800, term 0, ref 3)
2023-11-23 16:27:56,649 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [2]: delegating to broker 127.0.0.1:29093/1 for partition with 0 messages (0 bytes) queued
2023-11-23 16:27:56,649 DEBUG: BRKMIGR [rdkafka#producer-1] [thrd:main]: Migrating topic topic-to-produce-to [2] 0x10880a800 from (none) to 127.0.0.1:29093/1 (sending PARTITION_JOIN to 127.0.0.1:29093/1)
2023-11-23 16:27:56,649 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]:   Topic topic-to-produce-to partition 3 Leader 1 Epoch 0
2023-11-23 16:27:56,649 DEBUG: BROKER [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [3]: leader -1 epoch -1 -> leader 1 epoch 0
2023-11-23 16:27:56,649 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [3]: delegate to broker 127.0.0.1:29093/1 (rktp 0x10880ae00, term 0, ref 3)
2023-11-23 16:27:56,649 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [3]: delegating to broker 127.0.0.1:29093/1 for partition with 0 messages (0 bytes) queued
2023-11-23 16:27:56,650 DEBUG: BRKMIGR [rdkafka#producer-1] [thrd:main]: Migrating topic topic-to-produce-to [3] 0x10880ae00 from (none) to 127.0.0.1:29093/1 (sending PARTITION_JOIN to 127.0.0.1:29093/1)
2023-11-23 16:27:56,650 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]:   Topic topic-to-produce-to partition 4 Leader 1 Epoch 0
2023-11-23 16:27:56,650 DEBUG: BROKER [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [4]: leader -1 epoch -1 -> leader 1 epoch 0
2023-11-23 16:27:56,650 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [4]: delegate to broker 127.0.0.1:29093/1 (rktp 0x10880b400, term 0, ref 3)
2023-11-23 16:27:56,650 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [4]: delegating to broker 127.0.0.1:29093/1 for partition with 0 messages (0 bytes) queued
2023-11-23 16:27:56,650 DEBUG: BRKMIGR [rdkafka#producer-1] [thrd:main]: Migrating topic topic-to-produce-to [4] 0x10880b400 from (none) to 127.0.0.1:29093/1 (sending PARTITION_JOIN to 127.0.0.1:29093/1)
2023-11-23 16:27:56,650 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]:   Topic topic-to-produce-to partition 5 Leader 1 Epoch 0
2023-11-23 16:27:56,650 DEBUG: BROKER [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [5]: leader -1 epoch -1 -> leader 1 epoch 0
2023-11-23 16:27:56,650 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [5]: delegate to broker 127.0.0.1:29093/1 (rktp 0x10880ba00, term 0, ref 3)
2023-11-23 16:27:56,650 DEBUG: BRKDELGT [rdkafka#producer-1] [thrd:main]: topic-to-produce-to [5]: delegating to broker 127.0.0.1:29093/1 for partition with 0 messages (0 bytes) queued
2023-11-23 16:27:56,650 DEBUG: BRKMIGR [rdkafka#producer-1] [thrd:main]: Migrating topic topic-to-produce-to [5] 0x10880ba00 from (none) to 127.0.0.1:29093/1 (sending PARTITION_JOIN to 127.0.0.1:29093/1)
2023-11-23 16:27:56,651 DEBUG: PARTCNT [rdkafka#producer-1] [thrd:main]: Partitioning 1 unassigned messages in topic topic-to-produce-to to 6 partitions
2023-11-23 16:27:56,651 DEBUG: UAS [rdkafka#producer-1] [thrd:main]: 1/1 messages were partitioned in topic topic-to-produce-to
2023-11-23 16:27:56,651 DEBUG: METADATA [rdkafka#producer-1] [thrd:main]: 127.0.0.1:29093/1: 1/1 requested topic(s) seen in metadata
2023-11-23 16:27:56,651 DEBUG: TOPBRK [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Topic topic-to-produce-to [0]: joining broker (rktp 0x108809800, 0 message(s) queued)
2023-11-23 16:27:56,651 DEBUG: FETCHADD [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Added topic-to-produce-to [0] to active list (1 entries, opv 0, 0 messages queued): joining
2023-11-23 16:27:56,651 DEBUG: DRAIN [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [0] beginning partition drain: wait for outstanding requests to finish before producing to new leader
2023-11-23 16:27:56,651 DEBUG: BROADCAST [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: Broadcasting state change
2023-11-23 16:27:56,651 DEBUG: TOPBRK [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Topic topic-to-produce-to [1]: joining broker (rktp 0x10880a200, 0 message(s) queued)
2023-11-23 16:27:56,651 DEBUG: FETCHADD [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Added topic-to-produce-to [1] to active list (2 entries, opv 0, 0 messages queued): joining
2023-11-23 16:27:56,651 DEBUG: DRAIN [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [1] beginning partition drain: wait for outstanding requests to finish before producing to new leader
2023-11-23 16:27:56,651 DEBUG: BROADCAST [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: Broadcasting state change
2023-11-23 16:27:56,651 DEBUG: TOPBRK [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Topic topic-to-produce-to [2]: joining broker (rktp 0x10880a800, 1 message(s) queued)
2023-11-23 16:27:56,651 DEBUG: FETCHADD [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Added topic-to-produce-to [2] to active list (3 entries, opv 0, 1 messages queued): joining
2023-11-23 16:27:56,651 DEBUG: DRAIN [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [2] beginning partition drain: wait for outstanding requests to finish before producing to new leader
2023-11-23 16:27:56,651 DEBUG: BROADCAST [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: Broadcasting state change
2023-11-23 16:27:56,651 DEBUG: TOPBRK [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Topic topic-to-produce-to [3]: joining broker (rktp 0x10880ae00, 0 message(s) queued)
2023-11-23 16:27:56,651 DEBUG: FETCHADD [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Added topic-to-produce-to [3] to active list (4 entries, opv 0, 0 messages queued): joining
2023-11-23 16:27:56,651 DEBUG: DRAIN [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [3] beginning partition drain: wait for outstanding requests to finish before producing to new leader
2023-11-23 16:27:56,651 DEBUG: BROADCAST [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: Broadcasting state change
2023-11-23 16:27:56,651 DEBUG: TOPBRK [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Topic topic-to-produce-to [4]: joining broker (rktp 0x10880b400, 0 message(s) queued)
2023-11-23 16:27:56,651 DEBUG: FETCHADD [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Added topic-to-produce-to [4] to active list (5 entries, opv 0, 0 messages queued): joining
2023-11-23 16:27:56,651 DEBUG: DRAIN [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [4] beginning partition drain: wait for outstanding requests to finish before producing to new leader
2023-11-23 16:27:56,651 DEBUG: BROADCAST [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: Broadcasting state change
2023-11-23 16:27:56,651 DEBUG: TOPBRK [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Topic topic-to-produce-to [5]: joining broker (rktp 0x10880ba00, 0 message(s) queued)
2023-11-23 16:27:56,651 DEBUG: FETCHADD [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Added topic-to-produce-to [5] to active list (6 entries, opv 0, 0 messages queued): joining
2023-11-23 16:27:56,651 DEBUG: DRAIN [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [5] beginning partition drain: wait for outstanding requests to finish before producing to new leader
2023-11-23 16:27:56,651 DEBUG: BROADCAST [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: Broadcasting state change
2023-11-23 16:27:56,651 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [0] all in-flight requests drained from queue
2023-11-23 16:27:56,651 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [1] all in-flight requests drained from queue
2023-11-23 16:27:56,652 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [2] all in-flight requests drained from queue
2023-11-23 16:27:56,652 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [2] 1 message(s) in xmit queue (1 added from partition queue)
2023-11-23 16:27:56,652 DEBUG: NEWPID [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [2] changed PID{Invalid} -> PID{Id:6048,Epoch:0} with base MsgId 1
2023-11-23 16:27:56,652 DEBUG: RESETSEQ [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [2] resetting epoch base seq from 0 to 1
2023-11-23 16:27:56,652 DEBUG: PRODUCE [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [2]: Produce MessageSet with 1 message(s) (581 bytes, ApiVersion 7, MsgVersion 2, MsgId 1, BaseSeq 0, PID{Id:6048,Epoch:0}, snappy)
2023-11-23 16:27:56,652 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [3] all in-flight requests drained from queue
2023-11-23 16:27:56,652 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [4] all in-flight requests drained from queue
2023-11-23 16:27:56,652 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [5] all in-flight requests drained from queue
2023-11-23 16:27:56,652 DEBUG: SEND [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Sent ProduceRequest (v7, 647 bytes @ 0, CorrId 6)
2023-11-23 16:27:56,664 DEBUG: RECV [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Received ProduceResponse (v7, 63 bytes, CorrId 6, rtt 16.47ms)
2023-11-23 16:27:56,664 DEBUG: MSGSET [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [2]: MessageSet with 1 message(s) (MsgId 1, BaseSeq 0) delivered
2023-11-23 16:27:56,664 INFO: Message users.UserRegistered[4bc47166-6487-4450-96d3-991bf4590324] delivered to topic-to-produce-to[2]@1

# Example 2
2023-11-23 16:25:54,715 DEBUG: WAKEUP [rdkafka#producer-1] [thrd:app]: 127.0.0.1:29093/1: Wake-up: flushing
2023-11-23 16:25:54,715 DEBUG: TOPPAR [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [4] 1 message(s) in xmit queue (1 added from partition queue)
2023-11-23 16:25:54,715 DEBUG: WAKEUP [rdkafka#producer-1] [thrd:app]: Wake-up sent to 1 broker thread in state >= UP: flushing
2023-11-23 16:25:54,715 DEBUG: NEWPID [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [4] changed PID{Invalid} -> PID{Id:6047,Epoch:0} with base MsgId 1
2023-11-23 16:25:54,715 DEBUG: RESETSEQ [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: topic-to-produce-to [4] resetting epoch base seq from 0 to 1
2023-11-23 16:25:54,715 DEBUG: PRODUCE [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [4]: Produce MessageSet with 1 message(s) (579 bytes, ApiVersion 7, MsgVersion 2, MsgId 1, BaseSeq 0, PID{Id:6047,Epoch:0}, snappy)
2023-11-23 16:25:54,716 DEBUG: SEND [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Sent ProduceRequest (v7, 645 bytes @ 0, CorrId 5)
2023-11-23 16:25:54,734 DEBUG: RECV [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: Received ProduceResponse (v7, 63 bytes, CorrId 5, rtt 25.46ms)
2023-11-23 16:25:54,734 DEBUG: MSGSET [rdkafka#producer-1] [thrd:127.0.0.1:29093/bootstrap]: 127.0.0.1:29093/1: topic-to-produce-to [4]: MessageSet with 1 message(s) (MsgId 1, BaseSeq 0) delivered
2023-11-23 16:25:54,734 INFO: Message users.UserRegistered[4e77ce7d-ccd2-4ede-9db4-f7aa289502b3] delivered to topic-to-produce-to[4]@0
  • [x] Provide broker log excerpts: Not relevant, client-side issue
  • [x] Critical issue

jzvelc avatar Nov 23 '23 15:11 jzvelc

Any news on this one? We really need some kind of workaround for first-publish latencies, as latencies of ~1s are totally unacceptable. Imagine running hundreds of processes serving HTTP requests. Some endpoints that publish Kafka messages are called less frequently and are often handled by different processes, which practically means we often hit the 1s publish latency. With frequent deployments, processes are restarted, which makes this even worse.

jzvelc avatar Jan 15 '24 10:01 jzvelc

Any update on this issue (or a workaround)? We are getting hit by this in production quite often because Python WSGI processes are recreated, and every time the first message for each topic is really slow, which negatively impacts our REST API latencies.

matejsp avatar May 30 '24 13:05 matejsp

list_topics uses the metadata API of librdkafka internally, which doesn't cache topic information when requesting all topics. I understand your concern, but caching all the topics is not a good option as there might be a lot of topics in the cluster.

A metadata call for specific topics caches those topics' information, so it is readily available for the producer to use. Can you call list_topics with the specific topic before calling produce?
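For example, something along these lines (the topic list and timeout are just illustrative):

# Pre-warm the metadata cache for each topic this process will produce to,
# so the first produce() per topic doesn't block on a metadata round trip.
for t in ['topic-to-produce-to']:  # illustrative list of topics
    producer_instance.list_topics(topic=t, timeout=10)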

pranavrth avatar Jun 06 '24 10:06 pranavrth

Calling list_topics with a specific topic works as intended and caches the topic. However, I don't want to call this for every topic, as that would result in too many requests. I would expect a single list_topics(topic=None) call to cache metadata for all topics, the same way it is done in the Golang implementation.

jzvelc avatar Jun 06 '24 11:06 jzvelc

Have you tried the latest version of the Golang client, and can you confirm that it caches all the topics? AFAIK it shouldn't.

pranavrth avatar Jun 10 '24 10:06 pranavrth