[bitnami/kafka] KRaft Cluster Can Be Secured But Not With SCRAM
Name and Version
bitnami/kafka:3.2
What steps will reproduce the bug?
- Modify libkafka.sh to remove the step that tries to create SASL users in ZooKeeper. We also need to fix a bug in how the KafkaClient JAAS element is configured when using SCRAM (ScramLoginModule lives in the scram package, not the plain package).
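For reference, after the second sed below the KafkaClient entry should reference the scram package. A sketch of the resulting kafka_jaas.conf entry (the username/password shown are just the illustrative defaults from the entrypoint script further down):

```
KafkaClient {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="broker"
   password="brokerPassword";
};
```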
./kafka/Dockerfile
FROM docker.io/bitnami/kafka:3.2

USER 0
RUN apt-get update && \
    apt-get install -y jq openssl dos2unix netcat dnsutils && \
    apt-get clean && \
    sed -i.bak '/\[\[ "\$KAFKA_CFG_SASL_ENABLED_MECHANISMS" =~ "SCRAM" \]\] && kafka_create_sasl_scram_zookeeper_users/c\export KAFKA_OPTS="-Djava.security.auth.login.config=\${KAFKA_CONF_DIR}/kafka_jaas.conf"' /opt/bitnami/scripts/libkafka.sh && \
    sed -i.bak '/org.apache.kafka.common.security.plain.ScramLoginModule required/c\    org.apache.kafka.common.security.scram.ScramLoginModule required' /opt/bitnami/scripts/libkafka.sh

COPY ./scripts /kafka-scripts
RUN chmod -R 777 /kafka-scripts && \
    find /kafka-scripts -name '*.sh' | xargs dos2unix

USER 1001
WORKDIR /kafka-scripts
ENTRYPOINT ["./entrypoint.sh"]
./kafka/scripts/entrypoint.sh
#!/bin/bash
. /opt/bitnami/scripts/libvalidations.sh

[[ -z "$KAFKA_INTER_BROKER_USER" ]] && export KAFKA_INTER_BROKER_USER=broker
[[ -z "$KAFKA_INTER_BROKER_PASSWORD" ]] && export KAFKA_INTER_BROKER_PASSWORD=brokerPassword
[[ -z "$KAFKA_CFG_SASL_ENABLED_MECHANISMS" ]] && export KAFKA_CFG_SASL_ENABLED_MECHANISMS=${APP_KAFKA_CLIENT_SASL_MECHANISM:-SCRAM-SHA-512}
[[ -z "$KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL" ]] && export KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=${KAFKA_CFG_SASL_ENABLED_MECHANISMS}

export NODE_COUNT=${NODE_COUNT:-1}
export CLIENT_PORT=${CLIENT_PORT:-9092}
export INTERNAL_PORT=${INTERNAL_PORT:-9093}
export KAFKA_CFG_BROKER_ID=${KAFKA_CFG_BROKER_ID:-1}
export KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
export KAFKA_CLIENT_LISTENER_NAME=CLIENT
export KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=${KAFKA_INTER_BROKER_LISTENER_NAME}:${KAFKA_INTER_BROKER_LISTENER_PROTOCOL:-SASL_PLAINTEXT},${KAFKA_CLIENT_LISTENER_NAME}:${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
export KAFKA_CFG_LISTENERS="${KAFKA_INTER_BROKER_LISTENER_NAME}://:${INTERNAL_PORT},${KAFKA_CLIENT_LISTENER_NAME}://:${CLIENT_PORT}"
export KAFKA_CFG_ADVERTISED_LISTENERS="${KAFKA_INTER_BROKER_LISTENER_NAME}://kubernetes.docker.internal:${INTERNAL_PORT},${KAFKA_CLIENT_LISTENER_NAME}://kubernetes.docker.internal:${CLIENT_PORT}"
export KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
export KAFKA_CFG_DEFAULT_REPLICATION_FACTOR=$NODE_COUNT
export KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=$NODE_COUNT
export KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=$NODE_COUNT
export KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR=${KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR:-2}
export KAFKA_CFG_NUM_PARTITIONS=${KAFKA_CFG_NUM_PARTITIONS:-10}

if is_boolean_yes "$KAFKA_ENABLE_KRAFT"; then
    [[ -z "$KAFKA_CFG_NODE_ID" ]] && export KAFKA_CFG_NODE_ID="${KAFKA_CFG_BROKER_ID}"
    export KAFKA_CFG_PROCESS_ROLES=broker,controller
    export CONTROLLER_PORT=${CONTROLLER_PORT:-2181}
    export KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
    export KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=${KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP},${KAFKA_CFG_CONTROLLER_LISTENER_NAMES}:PLAINTEXT
    export KAFKA_CFG_LISTENERS="${KAFKA_CFG_LISTENERS},${KAFKA_CFG_CONTROLLER_LISTENER_NAMES}://:${CONTROLLER_PORT}"
fi

# Build comma-separated client user/password lists from any
# APP_KAFKA_*_USER / APP_KAFKA_*_PASSWORD environment variable pairs
if [[ -z "$KAFKA_CLIENT_USERS" && -z "$KAFKA_CLIENT_PASSWORDS" ]]; then
    export KAFKA_CLIENT_USERS=$KAFKA_INTER_BROKER_USER
    export KAFKA_CLIENT_PASSWORDS=$KAFKA_INTER_BROKER_PASSWORD
    for user_key in $(printenv | grep "APP_KAFKA.*USER" | grep -o -P '(?<=APP_KAFKA).*(?=USER)'); do
        user_var="APP_KAFKA${user_key}USER"
        password_var="APP_KAFKA${user_key}PASSWORD"
        export KAFKA_CLIENT_USERS=$KAFKA_CLIENT_USERS,${!user_var}
        export KAFKA_CLIENT_PASSWORDS=$KAFKA_CLIENT_PASSWORDS,${!password_var}
    done
fi
echo "users: ${KAFKA_CLIENT_USERS}"

if [[ -n "$NODESET_FQDN" ]]; then
    current_node_id="${HOSTNAME: -1}"
    export NODE_FQDN="$HOSTNAME.$NODESET_FQDN"
    export KAFKA_CFG_ADVERTISED_LISTENERS="${KAFKA_INTER_BROKER_LISTENER_NAME}://${NODE_FQDN}:${INTERNAL_PORT},${KAFKA_CLIENT_LISTENER_NAME}://${NODE_FQDN}:${CLIENT_PORT}"
    export KAFKA_CFG_BROKER_ID="${current_node_id}"
    if is_boolean_yes "$KAFKA_ENABLE_KRAFT"; then
        export KAFKA_CFG_NODE_ID="${current_node_id}"
        for node_server_id in $(seq $NODE_COUNT); do
            node_id=$((node_server_id-1))
            # NB: bash assignments must not have spaces around "="
            [[ -n "${KAFKA_CFG_CONTROLLER_QUORUM_VOTERS}" ]] && export KAFKA_CFG_CONTROLLER_QUORUM_VOTERS="${KAFKA_CFG_CONTROLLER_QUORUM_VOTERS},"
            export KAFKA_CFG_CONTROLLER_QUORUM_VOTERS="${KAFKA_CFG_CONTROLLER_QUORUM_VOTERS}${node_id}@${HOSTNAME::-1}${node_id}.${NODESET_FQDN}:${CONTROLLER_PORT}"
        done
    fi
fi

exec /opt/bitnami/scripts/kafka/entrypoint.sh /opt/bitnami/scripts/kafka/run.sh
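To illustrate the quorum-voters loop at the end of the script, here is a standalone sketch with hypothetical StatefulSet-style inputs (pod names kafka-0 through kafka-2 and a made-up headless-service FQDN); it is not part of the image, it just shows the string being assembled:

```shell
#!/bin/bash
# Hypothetical inputs mirroring a 3-node StatefulSet (illustrative values only)
HOSTNAME=kafka-0
NODESET_FQDN=kafka-headless.default.svc.cluster.local
NODE_COUNT=3
CONTROLLER_PORT=2181

voters=""
for node_server_id in $(seq $NODE_COUNT); do
    node_id=$((node_server_id-1))
    # Separate entries with commas after the first one
    [[ -n "$voters" ]] && voters="${voters},"
    # id@host:port, where the host swaps the ordinal suffix of $HOSTNAME
    voters="${voters}${node_id}@${HOSTNAME::-1}${node_id}.${NODESET_FQDN}:${CONTROLLER_PORT}"
done
echo "$voters"
```

Running it prints one `id@host:port` entry per node, i.e. `0@kafka-0.…:2181,1@kafka-1.…:2181,2@kafka-2.…:2181`.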
- Configure a Kafka KRaft cluster:
./docker-compose.yaml
version: '3.9'

networks:
  infra:
    driver: bridge

services:
  kafka-1:
    build: ./kafka
    networks:
      - infra
    ports:
      - 9092:9092
      - 9093:9093
      - 2181:2181
    extra_hosts:
      - "kubernetes.docker.internal:host-gateway"
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_KRAFT_CLUSTER_ID=Z21R2idcSLiO8yU0KKOTxA
      - KAFKA_CLIENT_USERS=local
      - KAFKA_CLIENT_PASSWORDS=localPassword
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-512
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kubernetes.docker.internal:2181,2@kubernetes.docker.internal:2182,3@kubernetes.docker.internal:2183
      - KAFKA_CFG_BROKER_ID=1
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_LOG_RETENTION_HOURS=1
      - NODE_COUNT=3
      - CLIENT_PORT=9092
      - INTERNAL_PORT=9093
      - CONTROLLER_PORT=2181
      - APP_KAFKA_CLIENT_PROTOCOL=${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
    healthcheck:
      test: /bin/bash -c 'nc -z localhost 9092'
      interval: 10s
      timeout: 5s
      retries: 9
  kafka-2:
    build: ./kafka
    networks:
      - infra
    ports:
      - 9094:9094
      - 9095:9095
      - 2182:2182
    extra_hosts:
      - "kubernetes.docker.internal:host-gateway"
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_KRAFT_CLUSTER_ID=Z21R2idcSLiO8yU0KKOTxA
      - KAFKA_CLIENT_USERS=local
      - KAFKA_CLIENT_PASSWORDS=localPassword
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-512
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kubernetes.docker.internal:2181,2@kubernetes.docker.internal:2182,3@kubernetes.docker.internal:2183
      - KAFKA_CFG_BROKER_ID=2
      - KAFKA_CFG_NODE_ID=2
      - KAFKA_CFG_LOG_RETENTION_HOURS=1
      - NODE_COUNT=3
      - CLIENT_PORT=9094
      - INTERNAL_PORT=9095
      - CONTROLLER_PORT=2182
      - APP_KAFKA_CLIENT_PROTOCOL=${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
    healthcheck:
      test: /bin/bash -c 'nc -z localhost 9094'
      interval: 10s
      timeout: 5s
      retries: 9
  kafka-3:
    build: ./kafka
    networks:
      - infra
    ports:
      - 9096:9096
      - 9097:9097
      - 2183:2183
    extra_hosts:
      - "kubernetes.docker.internal:host-gateway"
    environment:
      - KAFKA_ENABLE_KRAFT=yes
      - KAFKA_KRAFT_CLUSTER_ID=Z21R2idcSLiO8yU0KKOTxA
      - KAFKA_CLIENT_USERS=local
      - KAFKA_CLIENT_PASSWORDS=localPassword
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=PLAIN,SCRAM-SHA-512
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-512
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kubernetes.docker.internal:2181,2@kubernetes.docker.internal:2182,3@kubernetes.docker.internal:2183
      - KAFKA_CFG_BROKER_ID=3
      - KAFKA_CFG_NODE_ID=3
      - KAFKA_CFG_LOG_RETENTION_HOURS=1
      - NODE_COUNT=3
      - CLIENT_PORT=9096
      - INTERNAL_PORT=9097
      - CONTROLLER_PORT=2183
      - APP_KAFKA_CLIENT_PROTOCOL=${APP_KAFKA_CLIENT_PROTOCOL:-SASL_PLAINTEXT}
    healthcheck:
      test: /bin/bash -c 'nc -z localhost 9096'
      interval: 10s
      timeout: 5s
      retries: 9
- The cluster will start, but all attempts to log in to it will fail. Changing KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL to PLAIN and configuring the client to use the PLAIN SASL mechanism works just fine; however, it fails whenever either of those is configured with a SCRAM SASL mechanism.
ERROR:kafka.conn:<BrokerConnection node_id=bootstrap-1 host=kubernetes.docker.internal:9092 <authenticating> [IPv4 ('192.168.65.2', 9092)]>: Error receiving reply from server
service-alerts-cli-1 | Traceback (most recent call last):
service-alerts-cli-1 | File "/usr/src/app/packages/kafka/conn.py", line 645, in _try_authenticate_plain
service-alerts-cli-1 | data = self._recv_bytes_blocking(4)
service-alerts-cli-1 | File "/usr/src/app/packages/kafka/conn.py", line 616, in _recv_bytes_blocking
service-alerts-cli-1 | raise ConnectionError('Connection reset during recv')
service-alerts-cli-1 | ConnectionError: Connection reset during recv
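For context, the failing client was configured with the SCRAM mechanism along these lines (a sketch in Java-client properties form; the trace above happens to come from kafka-python, which takes the equivalent sasl_mechanism / sasl_plain_username / sasl_plain_password arguments, and the credentials shown are the compose defaults):

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="local" \
    password="localPassword";
```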
What is the expected behavior?
That the server would support SCRAM authentication.
What do you see instead?
Secure authentication only works when using the PLAIN SASL mechanism.
Additional information
I admit that this may be a case of KRaft simply not being ready for SCRAM yet; I am unable to determine whether that is the case. The Kafka release notes indicate that the only pending SCRAM-related work is the ability to create users via the administrative API, but to my understanding that doesn't necessarily preclude configuring SCRAM via the JAAS file.
I am completely fine if this is a known limitation of KRaft at this point. I simply couldn't find anything stating that SCRAM flat-out doesn't work yet.
Hi,
Regarding the limitation of KRaft and SCRAM, my advice would be to ask the Kafka devs directly to see if this is configurable. If so, we could then see what needs to be modified in the container logic to support it.
Understood. Do you happen to know how to get in touch with that community? Their issue board is deactivated on GitHub, and from my own internet sleuthing they don't appear to have a Gitter channel or anything along those lines.
Hi,
Could you try this? https://forum.confluent.io/
Thanks a million! I've reached out to their community. It appears that they are actively looking for people to give this a whirl so hopefully they will provide some feedback sooner rather than later. I am including a link to that post here should anyone on this team be interested in following along.
https://forum.confluent.io/t/kraft-apache-kafka-without-zookeeper/2935/7
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Refer to the 3.5.0 release notes on SCRAM support with KRaft controllers:
https://downloads.apache.org/kafka/3.5.0/RELEASE_NOTES.html
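For anyone landing here later: as of Kafka 3.5 (KIP-900), SCRAM credentials can be seeded into a KRaft cluster when the storage directory is formatted, roughly like this (the cluster ID reuses the one from the compose file above; user, password, and config path are placeholders):

```shell
# Format KRaft storage and bootstrap a SCRAM-SHA-512 credential (Kafka 3.5+)
kafka-storage.sh format \
  --config /path/to/server.properties \
  --cluster-id Z21R2idcSLiO8yU0KKOTxA \
  --add-scram 'SCRAM-SHA-512=[name="local",password="localPassword"]'
```

This sidesteps the chicken-and-egg problem that inter-broker SCRAM credentials previously had to exist before the cluster could start.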