cp-all-in-one
Control-Center not able to connect to Kafka-Connect cluster
Description
Using the cp-all-in-one-kraft docker-compose file with `confluentinc/cp-kafka-connect:7.1.1.amd64` and `confluentinc/cp-enterprise-control-center:7.1.1.amd64`, kafka-connect is showing errors in the logs as follows:
[2022-06-04 14:38:15,210] ERROR Uncaught exception in REST call to /v1/metadata/id (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
javax.ws.rs.NotFoundException: HTTP 404 Not Found
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:252)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
at java.base/java.lang.Thread.run(Thread.java:829)
It seems control-center is sending those requests (i.e. `/v1/metadata/id`) to kafka-connect and, as a result, shows that no kafka-connect clusters can be found.
Using `confluentinc/cp-enterprise-control-center:6.2.0` does not exhibit this issue.
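For reference, the path those requests take is determined by the Control Center → Connect wiring in the compose file. Below is a minimal sketch of the two services as reported, with the image tags taken from this report and the remaining settings assumed to match the cp-all-in-one compose files shown later in this thread:

```yaml
services:
  connect:
    # Plain Kafka Connect image, as reported; its REST API returns 404 for /v1/metadata/id
    image: confluentinc/cp-kafka-connect:7.1.1.amd64
    ports:
      - "8083:8083"
  control-center:
    image: confluentinc/cp-enterprise-control-center:7.1.1.amd64
    environment:
      # Control Center learns about the Connect cluster from this setting and then
      # health-checks the REST endpoint behind it
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
```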
Environment
- GitHub branch: 7.1.1
- Operating System: Windows
- Version of Docker: 20.10.14, build a224086
- Version of Docker Compose: 1.29.2, build 5becea4c
I am also facing the same issue. In fact, for me control-center is not even starting up; it's not able to connect to the brokers.
The compose file I am using is below:
```yaml
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-server:7.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:7.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-7.1.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

  control-center:
    image: confluentinc/cp-enterprise-control-center:7.1.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021

  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:7.1.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'

  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:7.1.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true

  ksql-datagen:
    image: confluentinc/ksqldb-examples:7.1.0
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
               cub kafka-ready -b broker:29092 1 40 && \
               echo Waiting for Confluent Schema Registry to be ready... && \
               cub sr-ready schema-registry 8081 40 && \
               echo Waiting a few seconds for topic creation to finish... && \
               sleep 11 && \
               tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081

  rest-proxy:
    image: confluentinc/cp-kafka-rest:7.1.0
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
```
Machine details: Mac M1 Pro, Docker version 20.10.14 (build a224086), Docker Compose version v2.5.1.
@vsinha1105 I managed to solve my issue of control-center not even starting up by increasing the Docker VM memory from 2GB to 8GB.
Hi all. I am facing the same issue. Steps to reproduce:
1. Download https://github.com/confluentinc/cp-all-in-one/blob/7.1.1-post/cp-all-in-one/docker-compose.yml
2. Change `connect.image` to `confluentinc/cp-kafka-connect:7.1.1` (as sketched after these steps).
3. Start up docker compose and check that:
   3.1. The UI shows "No Connect Clusters Found".
   3.2. `curl http://127.0.0.1:8083/v1/metadata/id` returns `{"error_code":404,"message":"HTTP 404 Not Found"}`. The same request returns `{"id":"","scope":{"path":[],"clusters":{"kafka-cluster":"***","connect-cluster":"compose-connect-group"}}}` for the default `cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0` image.
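For clarity, step 2 amounts to a one-line change in the connect service of that file. Here is a minimal sketch, assuming everything else in the downloaded docker-compose.yml is left untouched:

```yaml
services:
  connect:
    # Swapping the demo image (cnfldemos/cp-server-connect-datagen) for the plain
    # community Connect image is what triggers "No Connect Clusters Found" in the UI.
    image: confluentinc/cp-kafka-connect:7.1.1
    # ... all other connect settings unchanged from the upstream compose file
```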
This is what worked for me on my M1 chip (see the compose sketch after this list):
- zookeeper image: confluentinc/cp-zookeeper:latest.arm64
- broker image: confluentinc/cp-server:latest.arm64
- schema-registry image: confluentinc/cp-schema-registry:latest.arm64
- connect image: cnfldemos/cp-server-connect-datagen:0.5.0-6.2.0 [I haven't tested with the latest cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0]
- control-center image: confluentinc/cp-enterprise-control-center:latest.arm64
- ksqldb-server image: confluentinc/cp-ksqldb-server:latest.arm64
- ksqldb-cli image: confluentinc/cp-ksqldb-cli:latest.arm64
- ksql-datagen image: confluentinc/ksqldb-examples:latest.arm64
- rest-proxy image: confluentinc/cp-kafka-rest:latest.arm64
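If you prefer not to edit the upstream file, the same swaps can be kept in a docker-compose.override.yml next to it (Compose merges the override file automatically). A sketch listing only the changed image fields, with the service names assumed to match the upstream file:

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest.arm64
  broker:
    image: confluentinc/cp-server:latest.arm64
  schema-registry:
    image: confluentinc/cp-schema-registry:latest.arm64
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.5.0-6.2.0
  control-center:
    image: confluentinc/cp-enterprise-control-center:latest.arm64
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:latest.arm64
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:latest.arm64
  ksql-datagen:
    image: confluentinc/ksqldb-examples:latest.arm64
  rest-proxy:
    image: confluentinc/cp-kafka-rest:latest.arm64
```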
https://github.com/confluentinc/cp-all-in-one/blob/7.1.1-post/cp-all-in-one/docker-compose.yml
Same error here, using `confluentinc/cp-kafka-connect:7.1.1` as the connect image and `confluentinc/cp-enterprise-control-center:7.1.1` for control-center.
What do folks here think about the proposal in https://github.com/confluentinc/cp-all-in-one/pull/99#issuecomment-1188491079 ?
The original issue reports using a Windows host with explicit amd64 image tags. Let's not mix it up with issues related to M1 Macs.
Figured out finally. `confluent.controlcenter.connect.healthcheck.endpoint` should be `/connectors`, in accordance with the documentation: https://docs.confluent.io/platform/current/control-center/installation/configuration.html#general
> Figured out finally. `confluent.controlcenter.connect.healthcheck.endpoint` should be `/connectors`, in accordance with the documentation: https://docs.confluent.io/platform/current/control-center/installation/configuration.html#general
In the docker-compose environment configuration, which property name should this be? I tried CONTROL_CENTER_CONNECT_HEALTHCHECK_ENDPOINT and nothing changed; I still get the same error.
> I tried CONTROL_CENTER_CONNECT_HEALTHCHECK_ENDPOINT and nothing changed
You need the whole variable name with CONFLUENT in it: `CONTROL_CENTER_CONFLUENT_CONTROLCENTER_CONNECT_HEALTHCHECK_ENDPOINT`.
Edit: I accidentally captured the wrong one. It's actually `CONTROL_CENTER_CONNECT_HEALTHCHECK_ENDPOINT`. That ought to fix it; I tried it on mine and it links up well.
control-center:
  image: confluentinc/cp-enterprise-control-center:7.2.1
  hostname: control-center
  container_name: control-center
  depends_on:
    - broker
    - schema-registry
    - connect
    - ksqldb-server
  ports:
    - "9021:9021"
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
    CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'http://connect:8083'
    CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
    CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
    CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
    CONTROL_CENTER_REPLICATION_FACTOR: 1
    CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
    CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
    CONTROL_CENTER_CONNECT_HEALTHCHECK_ENDPOINT: '/connectors'
    CONFLUENT_METRICS_TOPIC_REPLICATION: 1
    PORT: 9021
Here are the results: (attachment from the original comment not reproduced here)
The connect image to use with Control Center is `confluentinc/cp-server-connect` rather than `confluentinc/cp-kafka-connect`. `confluentinc/cp-server-connect` is the base image for `cnfldemos/cp-server-connect-datagen` in cp-all-in-one, and `confluentinc/cp-kafka-connect` is the base image for `cnfldemos/kafka-connect-datagen` in cp-all-in-one-community.
Given this, I don't see any change needed in the Docker compose file(s), but perhaps this should be clarified in the documentation for people who don't want to use `cnfldemos/cp-server-connect-datagen`? Could folks on this thread chime in as to why they made this change? Was it to get past OS support issues, or did anyone do it because they didn't want datagen packaged?
Related: for a while, the `confluentinc/cp-*` images supported arm64 while `cnfldemos/cp-server-connect-datagen` did not, so were people making this change in order to get M1 support? Note that the latest `cnfldemos/cp-server-connect-datagen:0.6.0-7.2.1` in 7.2.2-post does now support arm64.
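In compose terms, the recommendation above is just a choice of image for the connect service. A minimal sketch, with the tag picked to match the versions discussed in this thread and all other connect settings assumed to stay as in the upstream file:

```yaml
services:
  connect:
    # cp-server-connect answers the /v1/metadata/id probe that Control Center sends,
    # which the community cp-kafka-connect image does not (it returns 404 instead).
    image: confluentinc/cp-server-connect:7.1.1
    # ... rest of the connect service definition as in docker-compose.yml
```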
Closing this out since it no longer seems applicable. Please comment or reopen if there are docs or other changes desired!
@ntx-ben @vsinha1105 Here is a working one: https://github.com/Raghav2211/cp-all-in-one/commit/7ea5f21093872c4bc8f5ecdeb75c7e967c322f73