
Dockerized Schema Registry with SASL Authentication

MrDHat opened this issue 6 years ago • 7 comments

I am trying to set up Confluent Schema Registry with a Kerberos-backed Kafka cluster. Here is what my docker compose file looks like:

---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:3.3.0
    container_name: gob-zk-sasl-1
    network_mode: host
    volumes:
      - ./dev-setup:/etc/kafka/secrets
    ports:
      - "22181:22181"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_1_jaas.conf
          -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf
          -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
          -Dsun.security.krb5.debug=true
    extra_hosts:
      - "moby:127.0.0.1"
  kafka:
    image: confluentinc/cp-kafka:3.3.0
    container_name: gob-kafka-sasl-1
    volumes:
      - ./dev-setup:/etc/kafka/secrets
    depends_on:
      - zookeeper
    network_mode: host
    ports:
      - "29094:29094"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: localhost:22181
      KAFKA_ADVERTISED_LISTENERS: SASL_SSL://localhost:29094
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker1.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: broker1_keystore_creds
      KAFKA_SSL_KEY_CREDENTIALS: broker1_sslkey_creds
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker1.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker1_truststore_creds
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_SSL
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: GSSAPI
      KAFKA_SASL_ENABLED_MECHANISMS: GSSAPI
      KAFKA_SASL_KERBEROS_SERVICE_NAME: kafka
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/broker1_jaas.conf
          -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf
          -Dsun.security.krb5.debug=true
    extra_hosts:
      - "moby:127.0.0.1"
  schema-registry:
    image: confluentinc/cp-schema-registry:3.3.0
    container_name: gob-schema-registry
    restart: on-failure:3
    volumes:
      - ./dev-setup:/etc/kafka/secrets
    depends_on:
      - zookeeper
      - kafka
    network_mode: host
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: localhost:22181
      SCHEMA_REGISTRY_HOST_NAME: localhost
      SCHEMA_REGISTRY_LISTENERS: http://localhost:8081
      KAFKASTORE_BOOTSTRAP_SERVERS: SASL_SSL://localhost:29094
      KAFKASTORE_SASL_KERBEROS_SERVICE_NAME: kafka
      ZOOKEEPER_SET_ACL: "true"
      KAFKASTORE_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker1.truststore.jks
      KAFKASTORE_SSL_TRUSTSTORE_CREDENTIALS: /etc/kafka/secrets/broker1_truststore_creds
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/broker1_jaas.conf
          -Djava.security.krb5.conf=/etc/krb5.conf
          -Dsun.security.krb5.debug=true
    extra_hosts:
      - "moby:127.0.0.1"

I get this in my container logs:

gob-schema-registry | ===> ENV Variables ...
gob-schema-registry |
gob-schema-registry | . /etc/confluent/docker/apply-mesos-overrides
gob-schema-registry | + . /etc/confluent/docker/apply-mesos-overrides
gob-schema-registry | #!/usr/bin/env bash
gob-schema-registry | #
gob-schema-registry | # Copyright 2016 Confluent Inc.
gob-schema-registry | #
gob-schema-registry | # Licensed under the Apache License, Version 2.0 (the "License");
gob-schema-registry | # you may not use this file except in compliance with the License.
gob-schema-registry | # You may obtain a copy of the License at
gob-schema-registry | #
gob-schema-registry | # http://www.apache.org/licenses/LICENSE-2.0
gob-schema-registry | #
gob-schema-registry | # Unless required by applicable law or agreed to in writing, software
gob-schema-registry | # distributed under the License is distributed on an "AS IS" BASIS,
gob-schema-registry | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
gob-schema-registry | # See the License for the specific language governing permissions and
gob-schema-registry | # limitations under the License.
gob-schema-registry |
gob-schema-registry | # Mesos DC/OS docker deployments will have HOST and PORT0
gob-schema-registry | # set for the proxying of the service.
gob-schema-registry | #
gob-schema-registry | # Use those values provide things we know we'll need.
gob-schema-registry |
gob-schema-registry | [ -n "${HOST:-}" ] && [ -z "${SCHEMA_REGISTRY_HOST_NAME:-}" ] && \
gob-schema-registry | 	export SCHEMA_REGISTRY_HOST_NAME=$HOST || true # we don't want the setup to fail if not on Mesos
gob-schema-registry | ++ '[' -n '' ']'
gob-schema-registry | ++ true
gob-schema-registry |
gob-schema-registry |
gob-schema-registry | echo "===> ENV Variables ..."
gob-schema-registry | + echo '===> ENV Variables ...'
gob-schema-registry | env | sort
gob-schema-registry | + env
gob-schema-registry | + sort
gob-schema-registry | ALLOW_UNSIGNED=false
gob-schema-registry | COMPONENT=schema-registry
gob-schema-registry | CONFLUENT_DEB_VERSION=1
gob-schema-registry | CONFLUENT_MAJOR_VERSION=3
gob-schema-registry | CONFLUENT_MINOR_VERSION=3
gob-schema-registry | CONFLUENT_MVN_LABEL=
gob-schema-registry | CONFLUENT_PATCH_VERSION=0
gob-schema-registry | CONFLUENT_PLATFORM_LABEL=
gob-schema-registry | CONFLUENT_VERSION=3.3.0
gob-schema-registry | HOME=/root
gob-schema-registry | HOSTNAME=moby
gob-schema-registry | KAFKASTORE_BOOTSTRAP_SERVERS=SASL_SSL://localhost:29094
gob-schema-registry | KAFKASTORE_SASL_KERBEROS_SERVICE_NAME=kafka
gob-schema-registry | KAFKASTORE_SSL_TRUSTSTORE_CREDENTIALS=/etc/kafka/secrets/broker1_truststore_creds
gob-schema-registry | KAFKASTORE_SSL_TRUSTSTORE_LOCATION=/etc/kafka/secrets/kafka.broker1.truststore.jks
gob-schema-registry | KAFKA_OPTS=-Djava.security.auth.login.config=/etc/kafka/secrets/broker1_jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true
gob-schema-registry | KAFKA_VERSION=0.11.0.0
gob-schema-registry | LANG=C.UTF-8
gob-schema-registry | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
gob-schema-registry | PWD=/
gob-schema-registry | PYTHON_PIP_VERSION=8.1.2
gob-schema-registry | PYTHON_VERSION=2.7.9-1
gob-schema-registry | SCALA_VERSION=2.11
gob-schema-registry | SCHEMA_REGISTRY_HOST_NAME=localhost
gob-schema-registry | SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:22181
gob-schema-registry | SCHEMA_REGISTRY_LISTENERS=http://localhost:8081
gob-schema-registry | SHLVL=1
gob-schema-registry | ZOOKEEPER_SET_ACL=true
gob-schema-registry | ZULU_OPENJDK_VERSION=8=8.17.0.3
gob-schema-registry | _=/usr/bin/env
gob-schema-registry | affinity:container==73bad4b40944a2346647515d5dd7772eb2144f2827aed77193852cbe8654fa2f
gob-schema-registry | no_proxy=*.local, 169.254/16
gob-schema-registry | ===> User
gob-schema-registry |
gob-schema-registry | echo "===> User"
gob-schema-registry | + echo '===> User'
gob-schema-registry | id
gob-schema-registry | + id
gob-schema-registry | uid=0(root) gid=0(root) groups=0(root)
gob-schema-registry | ===> Configuring ...
gob-schema-registry |
gob-schema-registry | echo "===> Configuring ..."
gob-schema-registry | + echo '===> Configuring ...'
gob-schema-registry | /etc/confluent/docker/configure
gob-schema-registry | + /etc/confluent/docker/configure
gob-schema-registry |
gob-schema-registry | dub ensure SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
gob-schema-registry | + dub ensure SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
gob-schema-registry | dub ensure SCHEMA_REGISTRY_HOST_NAME
gob-schema-registry | + dub ensure SCHEMA_REGISTRY_HOST_NAME
gob-schema-registry | dub path /etc/"${COMPONENT}"/ writable
gob-schema-registry | + dub path /etc/schema-registry/ writable
gob-schema-registry |
gob-schema-registry | if [[ -n "${SCHEMA_REGISTRY_PORT-}" ]]
gob-schema-registry | then
gob-schema-registry |   echo "PORT is deprecated. Please use SCHEMA_REGISTRY_LISTENERS instead."
gob-schema-registry |   exit 1
gob-schema-registry | fi
gob-schema-registry | + [[ -n '' ]]
gob-schema-registry |
gob-schema-registry | if [[ -n "${SCHEMA_REGISTRY_JMX_OPTS-}" ]]
gob-schema-registry | then
gob-schema-registry |   if [[ ! $SCHEMA_REGISTRY_JMX_OPTS == *"com.sun.management.jmxremote.rmi.port"*  ]]
gob-schema-registry |   then
gob-schema-registry |     echo "SCHEMA_REGISTRY_OPTS should contain 'com.sun.management.jmxremote.rmi.port' property. It is required for accessing the JMX metrics externally."
gob-schema-registry |   fi
gob-schema-registry | fi
gob-schema-registry | + [[ -n '' ]]
gob-schema-registry |
gob-schema-registry | dub template "/etc/confluent/docker/${COMPONENT}.properties.template" "/etc/${COMPONENT}/${COMPONENT}.properties"
gob-schema-registry | + dub template /etc/confluent/docker/schema-registry.properties.template /etc/schema-registry/schema-registry.properties
gob-schema-registry | dub template "/etc/confluent/docker/log4j.properties.template" "/etc/${COMPONENT}/log4j.properties"
gob-schema-registry | + dub template /etc/confluent/docker/log4j.properties.template /etc/schema-registry/log4j.properties
gob-schema-registry | dub template "/etc/confluent/docker/admin.properties.template" "/etc/${COMPONENT}/admin.properties"
gob-schema-registry | + dub template /etc/confluent/docker/admin.properties.template /etc/schema-registry/admin.properties
gob-schema-registry |
gob-schema-registry | echo "===> Running preflight checks ... "
gob-schema-registry | + echo '===> Running preflight checks ... '
gob-schema-registry | /etc/confluent/docker/ensure
gob-schema-registry | ===> Running preflight checks ...
gob-schema-registry | + /etc/confluent/docker/ensure
gob-schema-registry |
gob-schema-registry | echo "===> Check if Zookeeper is healthy ..."
gob-schema-registry | + echo '===> Check if Zookeeper is healthy ...'
gob-schema-registry | cub zk-ready "$SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL" "${SCHEMA_REGISTRY_CUB_ZK_TIMEOUT:-40}"
gob-schema-registry | ===> Check if Zookeeper is healthy ...
gob-schema-registry | + cub zk-ready localhost:22181 40
gob-schema-registry | SASL is enabled. java.security.auth.login.config=/etc/kafka/secrets/broker1_jaas.conf
gob-schema-registry | Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
gob-schema-registry | Client environment:host.name=localhost
gob-schema-registry | Client environment:java.version=1.8.0_102
gob-schema-registry | Client environment:java.vendor=Azul Systems, Inc.
gob-schema-registry | Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
gob-schema-registry | Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
gob-schema-registry | Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
gob-schema-registry | Client environment:java.io.tmpdir=/tmp
gob-schema-registry | Client environment:java.compiler=<NA>
gob-schema-registry | Client environment:os.name=Linux
gob-schema-registry | Client environment:os.arch=amd64
gob-schema-registry | Client environment:os.version=4.9.41-moby
gob-schema-registry | Client environment:user.name=root
gob-schema-registry | Client environment:user.home=/root
gob-schema-registry | Client environment:user.dir=/
gob-schema-registry | Initiating client connection, connectString=localhost:22181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@45ff54e6
gob-schema-registry | >>> KeyTabInputStream, readName(): TEST.GETSTRIKE.CO
gob-schema-registry | >>> KeyTabInputStream, readName(): zkclient
gob-schema-registry | >>> KeyTabInputStream, readName(): localhost
gob-schema-registry | >>> KeyTab: load() entry length: 71; type: 23
gob-schema-registry | >>> KeyTabInputStream, readName(): TEST.GETSTRIKE.CO
gob-schema-registry | >>> KeyTabInputStream, readName(): zkclient
gob-schema-registry | >>> KeyTabInputStream, readName(): localhost
gob-schema-registry | >>> KeyTab: load() entry length: 79; type: 16
gob-schema-registry | >>> KeyTabInputStream, readName(): TEST.GETSTRIKE.CO
gob-schema-registry | >>> KeyTabInputStream, readName(): zkclient
gob-schema-registry | >>> KeyTabInputStream, readName(): localhost
gob-schema-registry | >>> KeyTab: load() entry length: 63; type: 1
gob-schema-registry | Looking for keys for: zkclient/[email protected]
gob-schema-registry | Java config name: /etc/krb5.conf
gob-schema-registry | Found unsupported keytype (1) for zkclient/[email protected]
gob-schema-registry | Added key: 16version: 1
gob-schema-registry | Added key: 23version: 1
gob-schema-registry | >>> KdcAccessibility: reset
gob-schema-registry | Looking for keys for: zkclient/[email protected]
gob-schema-registry | Found unsupported keytype (1) for zkclient/[email protected]
gob-schema-registry | Added key: 16version: 1
gob-schema-registry | Added key: 23version: 1
gob-schema-registry | Using builtin default etypes for default_tkt_enctypes
gob-schema-registry | default etypes for default_tkt_enctypes: 17 16 23.
gob-schema-registry | >>> KrbAsReq creating message
gob-schema-registry | getKDCFromDNS using UDP
gob-schema-registry | getKDCFromDNS using TCP
gob-schema-registry | SASL configuration failed: javax.security.auth.login.LoginException: Cannot locate KDC Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
gob-schema-registry | Opening socket connection to server localhost/127.0.0.1:22181
gob-schema-registry | Error occurred while connecting to Zookeeper server[localhost:22181]. Authentication failed.
gob-schema-registry | Socket connection established to localhost/127.0.0.1:22181, initiating session
gob-schema-registry | Session establishment complete on server localhost/127.0.0.1:22181, sessionid = 0x15e71077792000a, negotiated timeout = 40000
gob-schema-registry | Session: 0x15e71077792000a closed
gob-schema-registry | EventThread shut down
gob-schema-registry exited with code 1

This is what the jaas conf looks like for the schema registry:

// Kafka Client authentication
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/kafka/secrets/schemaregistry.keytab"
  principal="schemaregistry/[email protected]";
};

// Zookeeper client authentication
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/kafka/secrets/zkclient1.keytab"
  principal="zkclient/[email protected]";
};

It looks like the Schema Registry is unable to locate the KDC. I am not sure what is wrong here; I am guessing I am missing something in the docker compose file. Can someone help me out with this?

P.S. I am using an existing Kerberos server.

MrDHat · Sep 13 '17 11:09

cc @confluentinc/clients

ewencp · Sep 13 '17 18:09

Looks like the Kerberos client is not able to connect to the KDC. Did you add your realm and kdc to /etc/krb5.conf? Can you reach the kdc from within the docker image? (e.g. telnet theKdcHost 88)

edenhill · Sep 14 '17 19:09
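For anyone reproducing this, a minimal sketch of that reachability check, assuming the compose file from this thread and that a shell can be started in the image (nc or telnet may need to be installed first; the KDC host is the one from the krb5.conf posted further down):

# Start the schema-registry image with a shell instead of its normal entrypoint,
# since the actual service exits before it can be exec'd into.
docker run --rm -it --network host \
  -v "$PWD/dev-setup:/etc/kafka/secrets" \
  --entrypoint bash confluentinc/cp-schema-registry:3.3.0

# From inside the container: check DNS resolution and TCP connectivity to the KDC (port 88).
getent hosts kerberos-gob-dev.getstrike.co
nc -vz kerberos-gob-dev.getstrike.co 88   # or: telnet kerberos-gob-dev.getstrike.co 88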

I am unable to test if the KDC is reachable since the container never starts up. I think we can assume that the server is reachable since other containers (kafka and zookeeper) can reach the KDC server and they are in the same network as the schema registry.

Here is my krb5.conf

[logging]
default = FILE:/var/log/kerberos/krb5libs.log
kdc = FILE:/var/log/kerberos/krb5kdc.log
admin_server = FILE:/var/log/kerberos/kadmind.log

[libdefaults]
default_realm = TEST.GETSTRIKE.CO
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
TEST.GETSTRIKE.CO = {
  kdc = kerberos-gob-dev.getstrike.co
  admin_server = kerberos-gob-dev.getstrike.co
}

[domain_realm]
.TEST.GETSTRIKE.CO = TEST.GETSTRIKE.CO
TEST.GETSTRIKE.CO = TEST.GETSTRIKE.CO

Is there a way to get more detailed logs in the schema registry?

MrDHat · Sep 15 '17 09:09

I could be missing something here, but from the log below it looks like the ZK connectivity failed because of a bad SASL key. Irrespective, you could get more logs by setting SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL to DEBUG.

gob-schema-registry | Found unsupported keytype (1) for zkclient/[email protected]
gob-schema-registry | Added key: 16version: 1
gob-schema-registry | Added key: 23version: 1
gob-schema-registry | Using builtin default etypes for default_tkt_enctypes
gob-schema-registry | default etypes for default_tkt_enctypes: 17 16 23.
gob-schema-registry | >>> KrbAsReq creating message
gob-schema-registry | getKDCFromDNS using UDP
gob-schema-registry | getKDCFromDNS using TCP
gob-schema-registry | SASL configuration failed: javax.security.auth.login.LoginException: Cannot locate KDC Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
gob-schema-registry | Opening socket connection to server localhost/127.0.0.1:22181
gob-schema-registry | Error occurred while connecting to Zookeeper server[localhost:22181].

mageshn · Sep 15 '17 16:09
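For reference, the logging suggestion above amounts to one extra line under the schema-registry service in the compose file (a sketch; everything else stays as in the original):

  schema-registry:
    environment:
      # ...existing SCHEMA_REGISTRY_* / KAFKASTORE_* variables from the compose file above...
      SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL: DEBUG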

Setting SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL to DEBUG does nothing. I think I messed up the environment variables in docker compose. Can someone confirm if they are correct?

MrDHat · Sep 16 '17 09:09

This seems to be a network problem. Try to start the registry server in JVM debug mode (JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=9998") with suspend set to yes; then you can log in to the docker image and try to telnet to the port of the KDC.

jomach · Oct 23 '17 08:10
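A sketch of that suggestion in compose form, assuming the image honours a JAVA_OPTS environment variable as described; if it does not, the same flag could be appended to KAFKA_OPTS, which the compose file above already uses for the Kerberos system properties:

  schema-registry:
    environment:
      # suspend=y makes the JVM wait for a debugger, so the container stays up
      # long enough to docker exec into it and probe the KDC port manually.
      JAVA_OPTS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=9998"

With the JVM suspended, docker exec -it gob-schema-registry bash gives a shell from which the telnet/nc check from earlier in the thread can be run.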

After 3 years, while setting up Kafka with Kerberos, this problem appears again!

docker run -itd --name kafka \
  -h kafka \
  -v /opt/kafka/data:/var/lib/kafka/data/:rw \
  -v /opt/kafka/kafka.properties.template:/etc/confluent/docker/kafka.properties.template \
  -v /opt/kafka/kafka-server.keytab:/var/lib/kafka/kafka-server.keytab \
  -v /opt/kafka/zookeeper-client.keytab:/var/lib/kafka/zookeeper-client.keytab \
  -v /opt/kafka/krb5.conf:/var/lib/kafka/krb5.conf \
  -v /opt/kafka/jaas.conf:/var/lib/kafka/jaas.conf \
  -v /etc/timezone:/etc/timezone:ro \
  --add-host kdc1.ops.com:172.17.0.1 \
  --add-host zk1:172.17.0.2 \
  --add-host zk2:172.17.0.3 \
  --add-host zk3:172.17.0.4 \
  --restart always \
  -e KAFKA_OPTS="-Dlogging.level=INFO -Djava.security.krb5.conf=/var/lib/zookeeper/krb5.conf -Djava.security.auth.login.config=/var/lib/kafka/jaas.conf -Dsun.security.krb5.debug=true" \
  -e KAFKA_ADVERTISED_LISTENERS="SASL_PLAINTEXT://192.168.43.103:9092" \
  -e REPLICATION=1 \
  -e KAFKA_ZOOKEEPER_CONNECT="zk1:21811;zk2:21812;zk3:21813" \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  -e KAFKA_BROKER_ID=0 \
  -e SECURITY_INTER_BROKER_PROTOCOL='SASL_PLAINTEXT' \
  -e SASL_MECHANISM_INTER_BROKER_PROTOCOL='GSSAPI' \
  -e SASL_ENABLED_MECHANISMS='GSSAPI' \
  -p 9092:9092 \
  kafka:2.1.1cp1 bash

I started the kafka container with bash as the CMD and entered it; I can connect to the KDC:

nc -vz kdc1.ops.com 88
kdc1.ops.com [172.17.0.1] 88 (kerberos) open

BUT the problem is the same:

KrbAsReq creating message
getKDCFromDNS using UDP
getKDCFromDNS using TCP
[main] ERROR io.confluent.admin.utils.ClusterStatus - Timed out waiting for connection to Zookeeper server [zk1:21811].
		[Krb5LoginModule] authentication failed 
Cannot locate KDC
[main-SendThread(zk1:21811)] WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: Cannot locate KDC Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.

wuyudian1 · Jun 16 '20 13:06
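One additional check that may help here (not part of the original thread, just a suggestion): since the TCP port is reachable but the Kerberos library still reports "Cannot locate KDC", it is worth verifying that the MIT Kerberos client itself can get a ticket using the mounted krb5.conf, i.e. the same file the JVM is supposed to be pointed at via -Djava.security.krb5.conf. kinit may need to be installed in the image, and the principal below is a placeholder:

docker exec -it kafka bash
# Point the Kerberos tools at the mounted config and list the principals in the keytab.
export KRB5_CONFIG=/var/lib/kafka/krb5.conf
klist -k /var/lib/kafka/kafka-server.keytab
# Authenticate with one of the principals shown above (placeholder principal below).
kinit -kt /var/lib/kafka/kafka-server.keytab kafka/some-host@SOME.REALM
klist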