cp-docker-images
Configure Kafka Connect with sasl.mechanism=PLAIN/security.protocol=SASL_SSL
Hi,
We are trying to set up Kafka Connect via the chart in our cluster. My current values file looks as follows:
cp-kafka-connect:
  enabled: true
  replicaCount: 1
  fullNameOverride: kafka-connect
  resources:
    limits:
      cpu: "1"
      memory: 2Gi
    requests:
      cpu: "0.5"
      memory: 1Gi
  customEnv:
    CUSTOM_SCRIPT_PATH: /tmp/scripts/configure-connectors.sh
    CONNECT_BOOTSTRAP_SERVERS: #{ccloudKafkaBootstrapServer}#
    CONNECT_SASL_JAAS_CONFIG: |
      org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="#{ccloudConnectKafkaApiKey}#" \
      password="#{ccloudConnectKafkaApiSecret}#";
    CONNECT_SASL_MECHANISM: 'PLAIN'
    CONNECT_SECURITY_PROTOCOL: 'SASL_SSL'
    CONNECT_BASIC_AUTH_CREDENTIALS_SOURCE: USER_INFO
    CONNECT_BASIC_AUTH_USER_INFO: #{ccloudConnectSrApiKey}#:#{ccloudConnectSrApiSecret}#
    # in case the deprecated value is required
    CONNECT_SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO: #{ccloudConnectSrApiKey}#:#{ccloudConnectSrApiSecret}#
    CONNECT_SCHEMA_REGISTRY_URL: #{ccloudSchemaRegistryUrl}#
    CONNECT_REST_PORT: 8083
    CONNECT_REST_ADVERTISED_HOST_NAME: localhost # must be tested
    CONNECT_GROUP_ID: kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
    CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: connect-status
    CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: #{ccloudSchemaRegistryUrl}#
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: #{ccloudSchemaRegistryUrl}#
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
    CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
    CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components
  volumeMounts:
    - name: configure-connectors-script
      mountPath: /tmp/scripts
  volumes:
    - name: configure-connectors-script
      configMap:
        name: connectors-config
        items:
          - key: configure-connectors
            path: configure-connectors.sh
Inside configure-connectors.sh I have configs for two test connectors, datagen and azure-blob-storage-source, based on the demo.
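The datagen part of that script boils down to a connector definition roughly like this (illustrative sketch shown as YAML for readability; the real script posts the equivalent JSON to the worker's REST endpoint on port 8083, and only the connector and topic names are taken from the logs further down):

# Rough sketch of the datagen connector created by configure-connectors.sh;
# everything except the connector and topic names is illustrative.
name: datagen-users
config:
  connector.class: io.confluent.kafka.connect.datagen.DatagenConnector
  kafka.topic: navbi.connect.datagen-users
  quickstart: users
  max.interval: "1000"
  iterations: "-1"
  tasks.max: "1"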
As you can see, I have already specified security options for the consumer, producer, admin client and worker, and yet I'm still not able to create the topics. It seems the admin client is not configured properly. I see several TimeoutExceptions in the logs:
$ kubectl logs confluent-platform-cp-kafka-connect-59b64446d6-9rcvj -c cp-kafka-connect-server | grep -i timeoutexception
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1607519025420) timed out at 1607519025421 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1607519055421) timed out at 1607519055422 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1607519085422) timed out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=createTopics, deadlineMs=1607519055420) timed out at 1607519055421 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
There's also:
org.apache.kafka.connect.errors.ConnectException: Timed out while checking for or creating topic(s) '_confluent-command'.
Datagen also writes a lot of:
WARN [Producer clientId=connector-producer-datagen-users-0] Error while fetching metadata with correlation id 4045 : {navbi.connect.datagen-users=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
I hope I don't need yet another prefix for it to work.
What else is needed so that Kafka Connect can create the topics? Is there something obvious missing in my config?
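To make the "prefix" question concrete: my reading of the Connect docs is that the worker-level security.protocol/sasl.* settings are not passed on to the producer, consumer and admin clients that the worker creates for connectors, and that those need their own producer./consumer./admin. prefixed copies. If that's right, the values above would need roughly these extra entries under customEnv (untested sketch, simply mirroring the worker-level settings):

    # Assumed extra overrides for the clients Connect creates for connectors;
    # they repeat the worker-level SASL settings with a per-client prefix.
    CONNECT_PRODUCER_SECURITY_PROTOCOL: 'SASL_SSL'
    CONNECT_PRODUCER_SASL_MECHANISM: 'PLAIN'
    CONNECT_PRODUCER_SASL_JAAS_CONFIG: |
      org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="#{ccloudConnectKafkaApiKey}#" \
      password="#{ccloudConnectKafkaApiSecret}#";
    CONNECT_ADMIN_SECURITY_PROTOCOL: 'SASL_SSL'
    CONNECT_ADMIN_SASL_MECHANISM: 'PLAIN'
    CONNECT_ADMIN_SASL_JAAS_CONFIG: |
      org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="#{ccloudConnectKafkaApiKey}#" \
      password="#{ccloudConnectKafkaApiSecret}#";
    # ...and the same three CONNECT_CONSUMER_* settings for sink connectors.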
I can see some suspicious values in AdminClientConfig logs:
[2020-12-09 13:03:15,417] INFO AdminClientConfig values:
bootstrap.servers = [xxxxxxx.westeurope.azure.confluent.cloud:9092]
client.dns.lookup = default
client.id = AzureBlobStorageSourceConnector-license-manager
....
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
...while I do not use GSSAPI. So it seems the AdminClient is still misconfigured.
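For the azure-blob-storage-source connector in particular, my guess is that the license-manager client above is built from confluent.topic.* settings in the connector config itself (for the _confluent-command license topic from the earlier error) rather than from the worker config. If that reading of the Confluent licensing docs is correct, the connector definition in configure-connectors.sh would need roughly the following additional properties (sketch shown as YAML; the script would post the JSON equivalent):

# Assumed license-topic client settings for the commercial connector;
# property names as I understand them from the Confluent licensing docs.
confluent.topic.bootstrap.servers: "#{ccloudKafkaBootstrapServer}#"
confluent.topic.replication.factor: "3"
confluent.topic.security.protocol: SASL_SSL
confluent.topic.sasl.mechanism: PLAIN
confluent.topic.sasl.jaas.config: >-
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="#{ccloudConnectKafkaApiKey}#"
  password="#{ccloudConnectKafkaApiSecret}#";
confluent.license: ""  # empty value should fall back to the trial license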
P.S. We have our production workloads running in the same cluster with the same ACLs, and they work fine.
@yuranos did you solve it?