Kafka Docker SASL_SSL samples are not working
Name and Version
bitnami/kafka:3.5.0
What architecture are you using?
arm64
What steps will reproduce the bug?
- Follow the documentation here to create a SASL_SSL-enabled Kafka in Docker: https://github.com/bitnami/containers/tree/main/bitnami/kafka#how-to-use-this-image:~:text=The%20following%20docker%2Dcompose%20file,use%20the%20credentials%20you%27ve%20provided. After generating the certs and keys, the whole thing can be copy-pasted into docker-compose.yaml:
version: '2'
services:
kafka:
image: 'bitnami/kafka:latest'
hostname: kafka.example.com
ports:
- '9092'
environment:
- KAFKA_CFG_LISTENERS=SASL_SSL://:9092,CONTROLLER://:9093
- KAFKA_CFG_ADVERTISED_LISTENERS=SASL_SSL://:9092
- KAFKA_CLIENT_USERS=user
- KAFKA_CLIENT_PASSWORDS=password
- KAFKA_CERTIFICATE_PASSWORD=certificatePassword123
- KAFKA_TLS_TYPE=JKS # or PEM
volumes:
# Both .jks and .pem files are supported
# - './kafka.keystore.pem:/opt/bitnami/kafka/config/certs/kafka.keystore.pem:ro'
# - './kafka.keystore.key:/opt/bitnami/kafka/config/certs/kafka.keystore.key:ro'
# - './kafka.truststore.pem:/opt/bitnami/kafka/config/certs/kafka.truststore.pem:ro'
- './kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro'
- './kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro'
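For reference, a minimal self-signed keystore/truststore for local testing can be generated with keytool along these lines (a sketch, not the exact script from the docs; the hostname and password match the compose file above):
# Hypothetical stand-in for the docs' cert-generation step: a self-signed broker
# keypair plus a truststore that trusts its certificate.
keytool -genkeypair -keystore kafka.keystore.jks -alias kafka -dname "CN=kafka.example.com" -keyalg RSA -validity 365 -storepass certificatePassword123 -keypass certificatePassword123
keytool -exportcert -keystore kafka.keystore.jks -alias kafka -file kafka.crt -storepass certificatePassword123
keytool -importcert -keystore kafka.truststore.jks -alias kafka -file kafka.crt -storepass certificatePassword123 -noprompt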
- Try to start the container with
docker compose up
What is the expected behavior?
The expectation is that the Kafka instance starts properly
What do you see instead?
The container fails to start with error code 1.
Additional information
Adding BITNAMI_DEBUG=true to the environment variables provides a bit more info; this is the troubleshooting I've managed so far.
This appears to be the initial error:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: inter.broker.listener.name must be a listener name defined in advertised.listeners. The valid options based on currently configured listeners are SASL_SSL
That can be fixed by adding environment variable:
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL_SSL
Now try to start the container again and you'll get a new error:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: sasl.mechanism.inter.broker.protocol must be included in sasl.enabled.mechanisms when SASL is used for inter-broker communication
That can be fixed by adding this line:
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-256
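For reference, the image maps KAFKA_CFG_* environment variables onto server.properties keys (lower-cased, with underscores becoming dots), so the two fixes above should end up rendered as:
inter.broker.listener.name=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256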
Now try to start the container again and you'll get this error:
java.lang.SecurityException: java.io.IOException: /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory)
That can be fixed by doing the following:
- create a file called kafka_jaas.conf with the following content:
sasl_ssl.KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="myuser"
password="mypassword";
};
- Mount it in docker-compose.yaml (last line added):
volumes:
# Both .jks and .pem files are supported
# - './kafka.keystore.pem:/opt/bitnami/kafka/config/certs/kafka.keystore.pem:ro'
# - './kafka.keystore.key:/opt/bitnami/kafka/config/certs/kafka.keystore.key:ro'
# - './kafka.truststore.pem:/opt/bitnami/kafka/config/certs/kafka.truststore.pem:ro'
- './kafka.keystore-1.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro'
- './kafka.truststore-1.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro'
- ./kafka_jaas.conf:/opt/bitnami/kafka/config/kafka_jaas.conf
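For what it's worth, the sasl_ssl.KafkaServer section name comes from the listener name (lower-cased) plus .KafkaServer. The same credentials could in principle be supplied inline via listener-scoped properties instead of a JAAS file; a sketch in server.properties form, reusing the placeholder credentials above:
listener.name.sasl_ssl.sasl.enabled.mechanisms=SCRAM-SHA-256
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="myuser" password="mypassword";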
Now try to start the container again and you'll get this error:
bitnamitest-kafka-1 | [2023-07-19 13:37:02,024] ERROR [BrokerServer id=1] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: No available authentication scheme for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
This is as far as I managed to get. It would be great if the docs could be updated so the samples work.
I reported the missing kafka_jaas.conf issue separately in #41457
Hi @Rablet
Thanks for reporting. There is definitely something wrong happening on our side, and I have the feeling that it might be related to the changes made to support Kafka KRaft. I will create an internal task to investigate what is wrong and update the instructions in the README as well.
In the meantime, you can use the following docker-compose.yml file (shown as a diff against the sample above) for it to work:
version: '2'
services:
kafka:
image: 'bitnami/kafka:latest'
hostname: kafka.example.com
ports:
- '9092'
environment:
- - KAFKA_CFG_LISTENERS=SASL_SSL://:9092,CONTROLLER://:9093
+ - KAFKA_CFG_LISTENERS=CLIENT://:9092,CONTROLLER://:9093
+ - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:SASL_SSL,CONTROLLER:PLAINTEXT
+ - KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
+ - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
+ - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- - KAFKA_CFG_ADVERTISED_LISTENERS=SASL_SSL://:9092
+ - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka.example.com:9092
- KAFKA_CLIENT_USERS=user
- KAFKA_CLIENT_PASSWORDS=password
- KAFKA_CERTIFICATE_PASSWORD=certificatePassword123
- KAFKA_TLS_TYPE=JKS # or PEM
volumes:
# Both .jks and .pem files are supported
# - './kafka.keystore.pem:/opt/bitnami/kafka/config/certs/kafka.keystore.pem:ro'
# - './kafka.keystore.key:/opt/bitnami/kafka/config/certs/kafka.keystore.key:ro'
# - './kafka.truststore.pem:/opt/bitnami/kafka/config/certs/kafka.truststore.pem:ro'
- './kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro'
- './kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro'
Could you please check if this works for you?
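If it does start, a quick way to exercise the listener from the host would be something like this (a sketch; client.properties is a hypothetical file name, the values mirror the compose file above, and it assumes kafka.example.com resolves to the container with port 9092 reachable):
# client.properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="password";
ssl.truststore.location=./kafka.truststore.jks
ssl.truststore.password=certificatePassword123
And then:
kafka-console-producer.sh --bootstrap-server kafka.example.com:9092 --producer.config client.properties --topic test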
⚠️ This is irrelevant and outdated, see my next comment.
Hi all, I was debugging the same issue today and was able to get it running with a combination of things from this issue.
I used a slightly modified docker-compose based on @joancafom's answer:
// docker-compose
version: '3.8'
services:
kafka:
image: 'bitnami/kafka:latest'
ports:
- '9092:9092'
environment:
- KAFKA_CFG_LISTENERS=CLIENT://:9092,CONTROLLER://:9093
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=SASL_SSL:SASL_SSL,CONTROLLER:SASL_SSL
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-512
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=SCRAM-SHA-512
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://localhost:9092
- BITNAMI_DEBUG=true
- KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
volumes:
- './keystores/kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro'
- './keystores/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro'
- './keystores/server.properties:/bitnami/kafka/config/server.properties'
You'll also need a server.properties file:
// server.properties
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=TLS
This will allow the image to boot; however, I still cannot connect to it from a local application.
The application dials the cluster, which generates the following response:
kafka-1 | [2023-07-24 16:19:03,252] WARN [SocketServer listenerType=BROKER, nodeId=1] Unexpected error from /172.30.0.1 (channelId=172.30.0.2:9092-172.30.0.1:52918-1); closing connection (org.apache.kafka.common.network.Selector)
kafka-1 | org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295617 larger than 104857600)
kafka-1 | at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:94)
kafka-1 | at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
kafka-1 | at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
kafka-1 | at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
kafka-1 | at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
kafka-1 | at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
kafka-1 | at kafka.network.Processor.poll(SocketServer.scala:1107)
kafka-1 | at kafka.network.Processor.run(SocketServer.scala:1011)
kafka-1 | at java.base/java.lang.Thread.run(Thread.java:833)
And on the client side:
{"level":"ERROR","timestamp":"2023-07-24T16:28:27.937Z","logger":"kafkajs","message":"[Connection] Connection error: Client network socket disconnected before secure TLS connection was established","broker":"localhost:9092","clientId":"kafkajs","stack":"Error: Client network socket disconnected before secure TLS connection was established\n at connResetException (node:internal/errors:717:14)\n at TLSSocket.onConnectEnd (node:_tls_wrap:1595:19)\n at TLSSocket.emit (node:events:525:35)\n at endReadableNT (node:internal/streams/readable:1359:12)\n at processTicksAndRejections (node:internal/process/task_queues:82:21)"}
Some googling suggests that encrypted traffic is hitting the cluster when it isn't expecting it to be encrypted (or vice versa). My application is configured to use SASL/SCRAM with the SCRAM-SHA-512 mechanism. This application can connect to existing, remote clusters, but not to this local image.
Is there something simple I've misconfigured?
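One way to narrow that down is to probe the port directly and check whether the broker is actually serving TLS; for example (a diagnostic sketch, not from the original setup):
# If the listener really is SASL_SSL, this should print a certificate chain;
# a plaintext listener will fail the TLS handshake instead.
openssl s_client -connect localhost:9092 </dev/null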
I came back to this with fresh eyes - my problem above was a cache issue, as the container was still initializing a cluster without SASL_SSL authentication.
I got the image to boot with SASL_SSL authentication this morning, but I'm unsure whether the SASL username/password is getting set. ZooKeeper was a required dependency, as the image would not start without it. I no longer need a JAAS config or a server.properties file.
My docker-compose file:
version: '3.8'
services:
zookeeper:
image: docker.io/bitnami/zookeeper:3.8
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
hostname: kafka.integration.com
ports:
- '9092:9092'
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,CONTROLLER://:9093
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:SASL_SSL,CONTROLLER:PLAINTEXT
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka.integration.com:9092
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_INTER_BROKER_LISTENER_NAME=CONTROLLER
- KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
- KAFKA_CLIENT_USERS=user
- KAFKA_CLIENT_PASSWORDS=password
- KAFKA_CERTIFICATE_PASSWORD=supersecretpassword
- KAFKA_TLS_TYPE=JKS
volumes:
- './keystores/kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro'
- './keystores/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro'
depends_on:
- zookeeper
So, my client should now be able to connect to the image with the following:
// configured with scram-sha-512 and SSL
KAFKA_USERNAME="user"
KAFKA_PASSWORD="password"
KAFKA_CLIENT_ID="whatever"
KAFKA_CONSUMER_GROUP_ID="whatever-1"
KAFKA_BROKERS="localhost:9092"
I can reach the image from my client, but the credentials are incorrect:
kafka-1 | [2023-07-25 09:50:43,680] INFO [SocketServer listenerType=BROKER, nodeId=1] Failed authentication with /172.28.0.1 (channelId=172.28.0.3:9092-172.28.0.1:44412-0) (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
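If the SCRAM credentials were never registered, they can in principle be created by hand with kafka-configs.sh (a hedged sketch; it needs a listener you can already authenticate against, or, in ZooKeeper mode and if your Kafka version still supports it, the deprecated --zookeeper flag to sidestep broker auth):
# Register SCRAM-SHA-512 credentials for the 'user' principal.
kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-512=[password=password]' --entity-type users --entity-name user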
This is the KafkaConfig info:
kafka-1 | [2023-07-25 09:35:58,277] INFO KafkaConfig values:
kafka-1 | advertised.listeners = CLIENT://kafka.integration.com:9092
kafka-1 | alter.config.policy.class.name = null
kafka-1 | alter.log.dirs.replication.quota.window.num = 11
kafka-1 | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka-1 | authorizer.class.name =
kafka-1 | auto.create.topics.enable = true
kafka-1 | auto.include.jmx.reporter = true
kafka-1 | auto.leader.rebalance.enable = true
kafka-1 | background.threads = 10
kafka-1 | broker.heartbeat.interval.ms = 2000
kafka-1 | broker.id = 1
kafka-1 | broker.id.generation.enable = true
kafka-1 | broker.rack = null
kafka-1 | broker.session.timeout.ms = 9000
kafka-1 | client.quota.callback.class = null
kafka-1 | compression.type = producer
kafka-1 | connection.failed.authentication.delay.ms = 100
kafka-1 | connections.max.idle.ms = 600000
kafka-1 | connections.max.reauth.ms = 0
kafka-1 | control.plane.listener.name = null
kafka-1 | controlled.shutdown.enable = true
kafka-1 | controlled.shutdown.max.retries = 3
kafka-1 | controlled.shutdown.retry.backoff.ms = 5000
kafka-1 | controller.listener.names = CONTROLLER
kafka-1 | controller.quorum.append.linger.ms = 25
kafka-1 | controller.quorum.election.backoff.max.ms = 1000
kafka-1 | controller.quorum.election.timeout.ms = 1000
kafka-1 | controller.quorum.fetch.timeout.ms = 2000
kafka-1 | controller.quorum.request.timeout.ms = 2000
kafka-1 | controller.quorum.retry.backoff.ms = 20
kafka-1 | controller.quorum.voters = [1@localhost:9093]
kafka-1 | controller.quota.window.num = 11
kafka-1 | controller.quota.window.size.seconds = 1
kafka-1 | controller.socket.timeout.ms = 30000
kafka-1 | create.topic.policy.class.name = null
kafka-1 | default.replication.factor = 1
kafka-1 | delegation.token.expiry.check.interval.ms = 3600000
kafka-1 | delegation.token.expiry.time.ms = 86400000
kafka-1 | delegation.token.master.key = null
kafka-1 | delegation.token.max.lifetime.ms = 604800000
kafka-1 | delegation.token.secret.key = null
kafka-1 | delete.records.purgatory.purge.interval.requests = 1
kafka-1 | delete.topic.enable = true
kafka-1 | early.start.listeners = null
kafka-1 | fetch.max.bytes = 57671680
kafka-1 | fetch.purgatory.purge.interval.requests = 1000
kafka-1 | group.consumer.assignors = []
kafka-1 | group.consumer.heartbeat.interval.ms = 5000
kafka-1 | group.consumer.max.heartbeat.interval.ms = 15000
kafka-1 | group.consumer.max.session.timeout.ms = 60000
kafka-1 | group.consumer.max.size = 2147483647
kafka-1 | group.consumer.min.heartbeat.interval.ms = 5000
kafka-1 | group.consumer.min.session.timeout.ms = 45000
kafka-1 | group.consumer.session.timeout.ms = 45000
kafka-1 | group.coordinator.new.enable = false
kafka-1 | group.coordinator.threads = 1
kafka-1 | group.initial.rebalance.delay.ms = 3000
kafka-1 | group.max.session.timeout.ms = 1800000
kafka-1 | group.max.size = 2147483647
kafka-1 | group.min.session.timeout.ms = 6000
kafka-1 | initial.broker.registration.timeout.ms = 60000
kafka-1 | inter.broker.listener.name = CLIENT
kafka-1 | inter.broker.protocol.version = 3.5-IV2
kafka-1 | kafka.metrics.polling.interval.secs = 10
kafka-1 | kafka.metrics.reporters = []
kafka-1 | leader.imbalance.check.interval.seconds = 300
kafka-1 | leader.imbalance.per.broker.percentage = 10
kafka-1 | listener.security.protocol.map = CLIENT:SASL_SSL,CONTROLLER:PLAINTEXT
kafka-1 | listeners = CLIENT://:9092,CONTROLLER://:9093
kafka-1 | log.cleaner.backoff.ms = 15000
kafka-1 | log.cleaner.dedupe.buffer.size = 134217728
kafka-1 | log.cleaner.delete.retention.ms = 86400000
kafka-1 | log.cleaner.enable = true
kafka-1 | log.cleaner.io.buffer.load.factor = 0.9
kafka-1 | log.cleaner.io.buffer.size = 524288
kafka-1 | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka-1 | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka-1 | log.cleaner.min.cleanable.ratio = 0.5
kafka-1 | log.cleaner.min.compaction.lag.ms = 0
kafka-1 | log.cleaner.threads = 1
kafka-1 | log.cleanup.policy = [delete]
kafka-1 | log.dir = /tmp/kafka-logs
kafka-1 | log.dirs = /bitnami/kafka/data
kafka-1 | log.flush.interval.messages = 9223372036854775807
kafka-1 | log.flush.interval.ms = null
kafka-1 | log.flush.offset.checkpoint.interval.ms = 60000
kafka-1 | log.flush.scheduler.interval.ms = 9223372036854775807
kafka-1 | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka-1 | log.index.interval.bytes = 4096
kafka-1 | log.index.size.max.bytes = 10485760
kafka-1 | log.message.downconversion.enable = true
kafka-1 | log.message.format.version = 3.0-IV1
kafka-1 | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka-1 | log.message.timestamp.type = CreateTime
kafka-1 | log.preallocate = false
kafka-1 | log.retention.bytes = -1
kafka-1 | log.retention.check.interval.ms = 300000
kafka-1 | log.retention.hours = 168
kafka-1 | log.retention.minutes = null
kafka-1 | log.retention.ms = null
kafka-1 | log.roll.hours = 168
kafka-1 | log.roll.jitter.hours = 0
kafka-1 | log.roll.jitter.ms = null
kafka-1 | log.roll.ms = null
kafka-1 | log.segment.bytes = 1073741824
kafka-1 | log.segment.delete.delay.ms = 60000
kafka-1 | max.connection.creation.rate = 2147483647
kafka-1 | max.connections = 2147483647
kafka-1 | max.connections.per.ip = 2147483647
kafka-1 | max.connections.per.ip.overrides =
kafka-1 | max.incremental.fetch.session.cache.slots = 1000
kafka-1 | message.max.bytes = 1048588
kafka-1 | metadata.log.dir = null
kafka-1 | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka-1 | metadata.log.max.snapshot.interval.ms = 3600000
kafka-1 | metadata.log.segment.bytes = 1073741824
kafka-1 | metadata.log.segment.min.bytes = 8388608
kafka-1 | metadata.log.segment.ms = 604800000
kafka-1 | metadata.max.idle.interval.ms = 500
kafka-1 | metadata.max.retention.bytes = 104857600
kafka-1 | metadata.max.retention.ms = 604800000
kafka-1 | metric.reporters = []
kafka-1 | metrics.num.samples = 2
kafka-1 | metrics.recording.level = INFO
kafka-1 | metrics.sample.window.ms = 30000
kafka-1 | min.insync.replicas = 1
kafka-1 | node.id = 1
kafka-1 | num.io.threads = 8
kafka-1 | num.network.threads = 3
kafka-1 | num.partitions = 1
kafka-1 | num.recovery.threads.per.data.dir = 1
kafka-1 | num.replica.alter.log.dirs.threads = null
kafka-1 | num.replica.fetchers = 1
kafka-1 | offset.metadata.max.bytes = 4096
kafka-1 | offsets.commit.required.acks = -1
kafka-1 | offsets.commit.timeout.ms = 5000
kafka-1 | offsets.load.buffer.size = 5242880
kafka-1 | offsets.retention.check.interval.ms = 600000
kafka-1 | offsets.retention.minutes = 10080
kafka-1 | offsets.topic.compression.codec = 0
kafka-1 | offsets.topic.num.partitions = 50
kafka-1 | offsets.topic.replication.factor = 1
kafka-1 | offsets.topic.segment.bytes = 104857600
kafka-1 | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka-1 | password.encoder.iterations = 4096
kafka-1 | password.encoder.key.length = 128
kafka-1 | password.encoder.keyfactory.algorithm = null
kafka-1 | password.encoder.old.secret = null
kafka-1 | password.encoder.secret = null
kafka-1 | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka-1 | process.roles = [broker, controller]
kafka-1 | producer.id.expiration.check.interval.ms = 600000
kafka-1 | producer.id.expiration.ms = 86400000
kafka-1 | producer.purgatory.purge.interval.requests = 1000
kafka-1 | queued.max.request.bytes = -1
kafka-1 | queued.max.requests = 500
kafka-1 | quota.window.num = 11
kafka-1 | quota.window.size.seconds = 1
kafka-1 | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka-1 | remote.log.manager.task.interval.ms = 30000
kafka-1 | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka-1 | remote.log.manager.task.retry.backoff.ms = 500
kafka-1 | remote.log.manager.task.retry.jitter = 0.2
kafka-1 | remote.log.manager.thread.pool.size = 10
kafka-1 | remote.log.metadata.manager.class.name = null
kafka-1 | remote.log.metadata.manager.class.path = null
kafka-1 | remote.log.metadata.manager.impl.prefix = null
kafka-1 | remote.log.metadata.manager.listener.name = null
kafka-1 | remote.log.reader.max.pending.tasks = 100
kafka-1 | remote.log.reader.threads = 10
kafka-1 | remote.log.storage.manager.class.name = null
kafka-1 | remote.log.storage.manager.class.path = null
kafka-1 | remote.log.storage.manager.impl.prefix = null
kafka-1 | remote.log.storage.system.enable = false
kafka-1 | replica.fetch.backoff.ms = 1000
kafka-1 | replica.fetch.max.bytes = 1048576
kafka-1 | replica.fetch.min.bytes = 1
kafka-1 | replica.fetch.response.max.bytes = 10485760
kafka-1 | replica.fetch.wait.max.ms = 500
kafka-1 | replica.high.watermark.checkpoint.interval.ms = 5000
kafka-1 | replica.lag.time.max.ms = 30000
kafka-1 | replica.selector.class = null
kafka-1 | replica.socket.receive.buffer.bytes = 65536
kafka-1 | replica.socket.timeout.ms = 30000
kafka-1 | replication.quota.window.num = 11
kafka-1 | replication.quota.window.size.seconds = 1
kafka-1 | request.timeout.ms = 30000
kafka-1 | reserved.broker.max.id = 1000
kafka-1 | sasl.client.callback.handler.class = null
kafka-1 | sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-512]
kafka-1 | sasl.jaas.config = null
kafka-1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka-1 | sasl.kerberos.min.time.before.relogin = 60000
kafka-1 | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka-1 | sasl.kerberos.service.name = null
kafka-1 | sasl.kerberos.ticket.renew.jitter = 0.05
kafka-1 | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka-1 | sasl.login.callback.handler.class = null
kafka-1 | sasl.login.class = null
kafka-1 | sasl.login.connect.timeout.ms = null
kafka-1 | sasl.login.read.timeout.ms = null
kafka-1 | sasl.login.refresh.buffer.seconds = 300
kafka-1 | sasl.login.refresh.min.period.seconds = 60
kafka-1 | sasl.login.refresh.window.factor = 0.8
kafka-1 | sasl.login.refresh.window.jitter = 0.05
kafka-1 | sasl.login.retry.backoff.max.ms = 10000
kafka-1 | sasl.login.retry.backoff.ms = 100
kafka-1 | sasl.mechanism.controller.protocol = GSSAPI
kafka-1 | sasl.mechanism.inter.broker.protocol = PLAIN
kafka-1 | sasl.oauthbearer.clock.skew.seconds = 30
kafka-1 | sasl.oauthbearer.expected.audience = null
kafka-1 | sasl.oauthbearer.expected.issuer = null
kafka-1 | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka-1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka-1 | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka-1 | sasl.oauthbearer.jwks.endpoint.url = null
kafka-1 | sasl.oauthbearer.scope.claim.name = scope
kafka-1 | sasl.oauthbearer.sub.claim.name = sub
kafka-1 | sasl.oauthbearer.token.endpoint.url = null
kafka-1 | sasl.server.callback.handler.class = null
kafka-1 | sasl.server.max.receive.size = 524288
kafka-1 | security.inter.broker.protocol = PLAINTEXT
kafka-1 | security.providers = null
kafka-1 | server.max.startup.time.ms = 9223372036854775807
kafka-1 | socket.connection.setup.timeout.max.ms = 30000
kafka-1 | socket.connection.setup.timeout.ms = 10000
kafka-1 | socket.listen.backlog.size = 50
kafka-1 | socket.receive.buffer.bytes = 102400
kafka-1 | socket.request.max.bytes = 104857600
kafka-1 | socket.send.buffer.bytes = 102400
kafka-1 | ssl.cipher.suites = []
kafka-1 | ssl.client.auth = none
kafka-1 | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka-1 | ssl.endpoint.identification.algorithm = https
kafka-1 | ssl.engine.factory.class = null
kafka-1 | ssl.key.password = [hidden]
kafka-1 | ssl.keymanager.algorithm = SunX509
kafka-1 | ssl.keystore.certificate.chain = null
kafka-1 | ssl.keystore.key = null
kafka-1 | ssl.keystore.location = /opt/bitnami/kafka/config/certs/kafka.keystore.jks
kafka-1 | ssl.keystore.password = [hidden]
kafka-1 | ssl.keystore.type = JKS
kafka-1 | ssl.principal.mapping.rules = DEFAULT
kafka-1 | ssl.protocol = TLSv1.3
kafka-1 | ssl.provider = null
kafka-1 | ssl.secure.random.implementation = null
kafka-1 | ssl.trustmanager.algorithm = PKIX
kafka-1 | ssl.truststore.certificates = null
kafka-1 | ssl.truststore.location = /opt/bitnami/kafka/config/certs/kafka.truststore.jks
kafka-1 | ssl.truststore.password = [hidden]
kafka-1 | ssl.truststore.type = JKS
kafka-1 | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka-1 | transaction.max.timeout.ms = 900000
kafka-1 | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka-1 | transaction.state.log.load.buffer.size = 5242880
kafka-1 | transaction.state.log.min.isr = 1
kafka-1 | transaction.state.log.num.partitions = 50
kafka-1 | transaction.state.log.replication.factor = 1
kafka-1 | transaction.state.log.segment.bytes = 104857600
kafka-1 | transactional.id.expiration.ms = 604800000
kafka-1 | unclean.leader.election.enable = false
kafka-1 | unstable.api.versions.enable = false
kafka-1 | zookeeper.clientCnxnSocket = null
kafka-1 | zookeeper.connect = zookeeper:2181
kafka-1 | zookeeper.connection.timeout.ms = null
kafka-1 | zookeeper.max.in.flight.requests = 10
kafka-1 | zookeeper.metadata.migration.enable = false
kafka-1 | zookeeper.session.timeout.ms = 18000
kafka-1 | zookeeper.set.acl = false
kafka-1 | zookeeper.ssl.cipher.suites = null
kafka-1 | zookeeper.ssl.client.enable = false
kafka-1 | zookeeper.ssl.crl.enable = false
kafka-1 | zookeeper.ssl.enabled.protocols = null
kafka-1 | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka-1 | zookeeper.ssl.keystore.location = null
kafka-1 | zookeeper.ssl.keystore.password = null
kafka-1 | zookeeper.ssl.keystore.type = null
kafka-1 | zookeeper.ssl.ocsp.enable = false
kafka-1 | zookeeper.ssl.protocol = TLSv1.2
kafka-1 | zookeeper.ssl.truststore.location = null
kafka-1 | zookeeper.ssl.truststore.password = null
kafka-1 | zookeeper.ssl.truststore.type = null
Hi @mwood77!
I managed to get it working without ZooKeeper, using SASL_SSL for everything except the CONTROLLER listener, which is SSL-only. This uses PLAIN as the SASL mechanism, since things stop working for me as soon as I change to SCRAM-SHA-512.
This is my setup; maybe by combining our progress we can get it working with SCRAM-SHA-512 as well 😄
docker-compose.yaml: (example.com is replaced with an actual domain pointing to these nodes in my setup)
version: "2"
services:
kafka-1:
image: bitnami/kafka:3.5
hostname: kafka-1.example.com
ports:
- "63796:9094"
environment:
- BITNAMI_DEBUG=true
- ALLOW_PLAINTEXT_LISTENER=no
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9095,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-1.example.com:9092,CLIENT://:9095,EXTERNAL://kafka-1.example.com:63796
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_SSL,CLIENT:SASL_SSL,CONTROLLER:SSL,EXTERNAL:SASL_SSL
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CERTIFICATE_PASSWORD=<password>
- KAFKA_TLS_TYPE=JKS
- KAFKA_CLIENT_USERS=myuser
- KAFKA_CLIENT_PASSWORDS=mypassword
- KAFKA_INTER_BROKER_USER=mybrokeruser
- KAFKA_INTER_BROKER_PASSWORD=mybrokerpassword
- KAFKA_CFG_NODE_ID=0
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1.example.com:9093,1@kafka-2.example.com:9093,2@kafka-3.example.com:9093
- KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
volumes:
- "kafka_1_data:/bitnami/kafka"
- "./kafka-1.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro"
- "./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro"
kafka-2:
image: bitnami/kafka:3.5
hostname: kafka-2.example.com
ports:
- "63797:9094"
environment:
- BITNAMI_DEBUG=false
- ALLOW_PLAINTEXT_LISTENER=no
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9095,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-2.example.com:9092,CLIENT://:9095,EXTERNAL://kafka-2.example.com:63797
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_SSL,CLIENT:SASL_SSL,CONTROLLER:SSL,EXTERNAL:SASL_SSL
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CERTIFICATE_PASSWORD=<password>
- KAFKA_TLS_TYPE=JKS
- KAFKA_CLIENT_USERS=myuser
- KAFKA_CLIENT_PASSWORDS=mypassword
- KAFKA_INTER_BROKER_USER=mybrokeruser
- KAFKA_INTER_BROKER_PASSWORD=mybrokerpassword
- KAFKA_CFG_NODE_ID=1
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1.example.com:9093,1@kafka-2.example.com:9093,2@kafka-3.example.com:9093
- KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
volumes:
- "kafka_2_data:/bitnami/kafka"
- "./kafka-2.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro"
- "./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro"
kafka-3:
image: bitnami/kafka:3.5
hostname: kafka-3.example.com
ports:
- "63798:9094"
environment:
- BITNAMI_DEBUG=false
- ALLOW_PLAINTEXT_LISTENER=no
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CONTROLLER://:9093,CLIENT://:9095,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-3.example.com:9092,CLIENT://:9095,EXTERNAL://kafka-3.example.com:63798
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_SSL,CLIENT:SASL_SSL,CONTROLLER:SSL,EXTERNAL:SASL_SSL
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_CERTIFICATE_PASSWORD=<password>
- KAFKA_TLS_TYPE=JKS
- KAFKA_CLIENT_USERS=myuser
- KAFKA_CLIENT_PASSWORDS=mypassword
- KAFKA_INTER_BROKER_USER=mybrokeruser
- KAFKA_INTER_BROKER_PASSWORD=mybrokerpassword
- KAFKA_CFG_NODE_ID=2
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1.example.com:9093,1@kafka-2.example.com:9093,2@kafka-3.example.com:9093
- KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
volumes:
- "kafka_3_data:/bitnami/kafka"
- "./kafka-3.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro"
- "./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro"
volumes:
kafka_1_data:
driver: local
kafka_2_data:
driver: local
kafka_3_data:
driver: local
And then on the machine with producer/consumers I have this kafka.properties file:
group.id=<groupname>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
#sasl.mechanism=SCRAM-SHA-512
ssl.enabled.protocols=TLSv1.3,TLSv1.2,TLSv1.1,TLSv1
ssl.truststore.location=./truststore/kafka.truststore.jks
ssl.truststore.password=<password>
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myuser" password="mypassword";
Producer started like this: kafka-console-producer.sh --bootstrap-server kafka-1.example.com:63796 --producer.config kafka.properties --topic=mytopic
And consumer like this: kafka-console-consumer.sh --bootstrap-server kafka-1.example.com:63796 --consumer.config kafka.properties --topic=mytopic
Missing steps:
- SASL for CONTROLLER listener
- SCRAM-SHA-512 for the SASL method
- Creating ACLs (see the sketch after this list)
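For the ACL step, the standard kafka-acls.sh tooling should apply once authentication works; a sketch reusing the broker address and kafka.properties from above (mytopic and myuser are the placeholders already used here):
# Allow myuser to read and write the test topic.
kafka-acls.sh --bootstrap-server kafka-1.example.com:63796 --command-config kafka.properties --add --allow-principal User:myuser --operation Read --operation Write --topic mytopic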
OK, it looks like SCRAM might not be supported in KRaft yet. I created a PR with which I was able to get this working: #42416.
I've got ACLs up and running as well now. The only thing missing is SASL with SCRAM for the Controller listener (I managed to get it working for SASL PLAIN). I posted on the Apache mailing list here: https://lists.apache.org/thread/p2d4fj5ytnt83kwg5vffox2fr7dzkxyn
Once I've got that up and running I'll post a sample in case it helps someone else.
Hi @Rablet!
We have released a new version of the bitnami/kafka image that refactors the initialization logic and aims to improve the Kafka KRaft user experience.
Some new features that you may be interested in:
- Adds support for SCRAM in Kafka 3.5+
- Controller listener now supports SASL
In case it helps you, I have updated your docker-compose YAML with the new settings to deploy a Kafka KRaft cluster with SASL_SSL, SCRAM and ACLs:
version: "2"
services:
kafka-1:
image: bitnami/kafka:3.5
hostname: kafka-1.example.com
ports:
- "63796:9094"
environment:
# KRaft settings
- KAFKA_CFG_NODE_ID=0
- KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1.example.com:9093,1@kafka-2.example.com:9093,2@kafka-3.example.com:9093
- KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
# Listeners settings
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9095,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-1.example.com:9092,CLIENT://:9095,EXTERNAL://kafka-1.example.com:63796
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_SSL,CLIENT:SASL_SSL,CONTROLLER:SASL_SSL,EXTERNAL:SASL_SSL
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
# SASL settings
- KAFKA_CLIENT_USERS=user
- KAFKA_CLIENT_PASSWORDS=password
- KAFKA_CLIENT_LISTENER_NAME=CLIENT
- KAFKA_CONTROLLER_USER=controller_user
- KAFKA_CONTROLLER_PASSWORD=controller_password
- KAFKA_INTER_BROKER_USER=inter_broker_user
- KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
# SSL settings
- KAFKA_CERTIFICATE_PASSWORD=my_pass
- KAFKA_TLS_TYPE=JKS
# ACL
- KAFKA_CFG_SUPER_USERS=User:user;User:controller_user;
- KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="true"
- KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer
- KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER
volumes:
- "kafka_1_data:/bitnami/kafka"
- "./kafka-1.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro"
- "./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro"
kafka-2:
image: bitnami/kafka:3.5
hostname: kafka-2.example.com
ports:
- "63797:9094"
environment:
# KRaft settings
- KAFKA_CFG_NODE_ID=1
- KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1.example.com:9093,1@kafka-2.example.com:9093,2@kafka-3.example.com:9093
- KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
# Listeners settings
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9095,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-2.example.com:9092,CLIENT://:9095,EXTERNAL://kafka-2.example.com:63797
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_SSL,CLIENT:SASL_SSL,CONTROLLER:SASL_SSL,EXTERNAL:SASL_SSL
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
# SASL settings
- KAFKA_CLIENT_USERS=user
- KAFKA_CLIENT_PASSWORDS=password
- KAFKA_CLIENT_LISTENER_NAME=CLIENT
- KAFKA_CONTROLLER_USER=controller_user
- KAFKA_CONTROLLER_PASSWORD=controller_password
- KAFKA_INTER_BROKER_USER=inter_broker_user
- KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
# SSL settings
- KAFKA_CERTIFICATE_PASSWORD=my_pass
- KAFKA_TLS_TYPE=JKS
# ACL
- KAFKA_CFG_SUPER_USERS=User:user;User:controller_user;
- KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="true"
- KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer
- KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER
volumes:
- "kafka_2_data:/bitnami/kafka"
- "./kafka-2.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro"
- "./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro"
kafka-3:
image: bitnami/kafka:3.5
hostname: kafka-3.example.com
ports:
- "63798:9094"
environment:
# KRaft settings
- KAFKA_CFG_NODE_ID=2
- KAFKA_CFG_PROCESS_ROLES=controller,broker
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-1.example.com:9093,1@kafka-2.example.com:9093,2@kafka-3.example.com:9093
- KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
# Listeners settings
- KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9095,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-3.example.com:9092,CLIENT://:9095,EXTERNAL://kafka-3.example.com:63798
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_SSL,CLIENT:SASL_SSL,CONTROLLER:SASL_SSL,EXTERNAL:SASL_SSL
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
- KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
# SASL settings
- KAFKA_CLIENT_USERS=user
- KAFKA_CLIENT_PASSWORDS=password
- KAFKA_CLIENT_LISTENER_NAME=CLIENT
- KAFKA_CONTROLLER_USER=controller_user
- KAFKA_CONTROLLER_PASSWORD=controller_password
- KAFKA_INTER_BROKER_USER=inter_broker_user
- KAFKA_INTER_BROKER_PASSWORD=inter_broker_password
# SSL settings
- KAFKA_CERTIFICATE_PASSWORD=my_pass
- KAFKA_TLS_TYPE=JKS
# ACL
- KAFKA_CFG_SUPER_USERS=User:user;User:controller_user;
- KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND="true"
- KAFKA_CFG_AUTHORIZER_CLASS_NAME=org.apache.kafka.metadata.authorizer.StandardAuthorizer
- KAFKA_CFG_EARLY_START_LISTENERS=CONTROLLER
volumes:
- "kafka_3_data:/bitnami/kafka"
- "./kafka-3.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro"
- "./truststore/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro"
volumes:
kafka_1_data:
driver: local
kafka_2_data:
driver: local
kafka_3_data:
driver: local
Hi @migruiz4! Thank you for that.
If I change the SASL mechanism from PLAIN to SCRAM-SHA-512 I keep getting an authentication error. I tried adding some extra logging and it looks like it is adding all the users:
--config /opt/bitnami/kafka/config/server.properties --ignore-formatted --cluster-id abcdefghijklmnopqrstug --add-scram SCRAM-SHA-512=[name=myuser,password=mypassword] --add-scram SCRAM-SHA-512=[name=root,password=password] --add-scram SCRAM-SHA-512=[name=myconsumeruser,password=myconsumerpassword] --add-scram SCRAM-SHA-512=[name=mybrokeruser,password=mybrokerpassword] --add-scram SCRAM-SHA-512=[name=controller_user,password=controller_password]
And the injected controller listener config looks correct to me:
listener.name.controller.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="controller_user" password="controller_password";
SCRAM-SHA-512 does work for the other listeners.
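A possibly useful cross-check is confirming that the SCRAM credentials really were persisted, by describing them over a listener that does authenticate (a sketch reusing the client settings from my earlier comment):
kafka-configs.sh --bootstrap-server kafka-1.example.com:63796 --command-config kafka.properties --describe --entity-type users --entity-name myuser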
I also noticed there's currently no way of setting the ssl.client.auth per listener (they all use the KAFKA_TLS_INTER_BROKER_AUTH config). I created #43135 for this.
Hi @Rablet,
Thanks for the feedback! I have been able to reproduce the issue when setting KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=SCRAM-SHA-512, although it seems a bit odd.
I'm also new to KRaft SCRAM, so it may be due to some misconfiguration, or maybe an upstream Kafka bug.
These are my findings so far, the same as you report:
- The KRaft storage is initialized using the following command:
/opt/bitnami/kafka/bin/kafka-storage.sh format --config /opt/bitnami/kafka/config/server.properties --ignore-formatted --cluster-id abcdefghijklmnopqrstuv --add-scram SCRAM-SHA-256=[name=user,password=password] --add-scram SCRAM-SHA-512=[name=user,password=password] --add-scram SCRAM-SHA-512=[name=inter_broker_user,password=inter_broker_password] --add-scram SCRAM-SHA-512=[name=controller_user,password=controller_password]
- The KRaft cluster fails to bootstrap due to an authentication error:
kafka-kafka-2-1 | [2023-08-01 11:12:45,295] ERROR [kafka-1-raft-outbound-request-thread]: Failed to send the following request due to authentication error: ClientRequest(expectResponse=true, callback=kafka.raft.KafkaNetworkChannel$$Lambda$687/0x00007f27d443fc60@2aba6075, destination=0, correlationId=129, clientId=raft-client-1, createdTimeMs=1690888364960, requestBuilder=VoteRequestData(clusterId='abcdefghijklmnopqrstug', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=4, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])])) (kafka.raft.RaftSendThread)
- The Kafka settings for the controller listener are:
listener.name.controller.sasl.enabled.mechanisms=SCRAM-SHA-512
listener.name.controller.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="controller_user" password="controller_password";
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Linked to:
- https://github.com/scram-sasl/info/issues/1
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
A recent activity.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
A new recent activity.
@bitnami: Have you made any progress on this?
Hi @Neustradamus,
We have investigated this issue further and haven't been able to resolve it.
The issue seems to be related to the command kafka-storage.sh --add-scram and Controller quorum formation using SCRAM.
We think the issue is probably a bug with Kafka, as from a user perspective the logic here is quite simple and easy to reproduce:
- Configure controller listener to use SASL_PLAINTEXT with SCRAM.
- Initialize the KRaft storage with SCRAM credentials for 'controller_user'.
- Start Kafka and see how Controllers fail to authenticate with each other.
As a side note, this same method works for inter-broker and client communications; only controller communications are affected when using SCRAM. Configuring the client and inter-broker listeners with SASL+SCRAM and the controller listener with SASL+PLAIN is still possible.
Information to reproduce the issue consistently:
- I used the following docker-compose:
version: "2"
services:
  kafka-0:
    image: bitnami/kafka:3.5
    environment:
      # KRaft settings
      - BITNAMI_DEBUG=true
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-0:9093,1@kafka-1:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners settings
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
      - KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9091,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-0:9092,CLIENT://:9091
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=SCRAM-SHA-512
      # SASL settings
      - KAFKA_CONTROLLER_USER=controller_user
      - KAFKA_CONTROLLER_PASSWORD=controller_password
  kafka-1:
    image: bitnami/kafka:3.5
    environment:
      # KRaft settings
      - BITNAMI_DEBUG=true
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-0:9093,1@kafka-1:9093
      - KAFKA_KRAFT_CLUSTER_ID=abcdefghijklmnopqrstuv
      # Listeners settings
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
      - KAFKA_CFG_LISTENERS=INTERNAL://:9092,CLIENT://:9091,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka-1:9092,CLIENT://:9091
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=SCRAM-SHA-512
      # SASL settings
      - KAFKA_CONTROLLER_USER=controller_user
      - KAFKA_CONTROLLER_PASSWORD=controller_password
- KRaft storage command:
/opt/bitnami/kafka/bin/kafka-storage.sh format --config /opt/bitnami/kafka/config/server.properties --ignore-formatted --cluster-id abcdefghijklmnopqrstuv --add-scram SCRAM-SHA-512=[name=controller_user,password=controller_password]
- Kafka configuration (server.properties):
listeners=INTERNAL://:9092,CLIENT://:9091,CONTROLLER://:9093
advertised.listeners=INTERNAL://kafka-0:9092,CLIENT://:9091
listener.security.protocol.map=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/bitnami/kafka/data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.retention.check.interval.ms=300000
controller.listener.names=CONTROLLER
controller.quorum.voters=0@kafka-0:9093,1@kafka-1:9093
inter.broker.listener.name=INTERNAL
node.id=0
process.roles=controller,broker
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.controller.protocol=SCRAM-SHA-512
listener.name.controller.sasl.enabled.mechanisms=SCRAM-SHA-512
listener.name.controller.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="controller_user" password="controller_password";
Mounting the configuration and running the container manually also reproduces the issue, using the following docker-compose:
version: "2"
services:
kafka-0:
image: bitnami/kafka:3.5
command: ["tail", "-f", "/dev/null"]
environment:
# KRaft settings
- BITNAMI_DEBUG=true
- KAFKA_SKIP_KRAFT_STORAGE_INIT=yes
volumes:
- "./server1.properties:/opt/bitnami/kafka/config/server.properties"
kafka-1:
image: bitnami/kafka:3.5
command: ["tail", "-f", "/dev/null"]
environment:
# KRaft settings
- BITNAMI_DEBUG=true
- KAFKA_SKIP_KRAFT_STORAGE_INIT=yes
volumes:
- "./server1.properties:/opt/bitnami/kafka/config/server.properties"
And, inside both containers, run the following commands:
# Manually init storage
/opt/bitnami/kafka/bin/kafka-storage.sh format --config /opt/bitnami/kafka/config/server.properties --ignore-formatted --cluster-id abcdefghijklmnopqrstuv --add-scram SCRAM-SHA-512=[name=controller_user,password=controller_password]
# Start Kafka
/opt/bitnami/scripts/kafka/entrypoint.sh /opt/bitnami/scripts/kafka/run.sh
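To check whether the --add-scram records actually landed in the metadata log, the __cluster_metadata partition can be dumped (a sketch; the exact segment file name will vary):
# UserScramCredentialRecord entries should show up if the format step worked.
/opt/bitnami/kafka/bin/kafka-dump-log.sh --cluster-metadata-decoder --files /bitnami/kafka/data/__cluster_metadata-0/00000000000000000000.log | grep -i scram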
I ended up creating a docker-compose with six nodes: three brokers and three controllers, with SASL_SSL enabled on both, using SCRAM on the brokers but PLAIN on the controllers.
Is this over the top for the sample docs? If not I can create a PR.
Hi @Rablet,
I have reported the controller + SCRAM issue in Kafka's project here: https://issues.apache.org/jira/browse/KAFKA-15513
> Is this over the top for the sample docs? If not I can create a PR.
It would be great if you could contribute a PR to help other users avoid this issue, although I would appreciate it if this issue were referenced in a note.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
How can this be resolved? Please let me know. Thank you.
Hi @amolfnal,
As reported to Kafka, controller-to-controller communication still does not support SCRAM; only the PLAIN mechanism is supported.
Setting KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN should fix the issue until SCRAM support is added for controller-to-controller communication.
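In compose terms, the workaround boils down to a fragment like this (the credential values are placeholders):
environment:
  # SCRAM can stay enabled for the client and inter-broker listeners;
  # only controller-to-controller traffic falls back to PLAIN.
  - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
  - KAFKA_CONTROLLER_USER=controller_user
  - KAFKA_CONTROLLER_PASSWORD=controller_password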