Jordan Moore
One alternative, [as shown by](https://rmoff.net/2018/12/15/docker-tips-and-tricks-with-ksql-and-kafka/) @rmoff: in the compose file, override the container command.

```yaml
volumes:
  - $PWD/scripts:/scripts  # TODO: Create this folder ahead of time, on your host
command: -...
```
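A sketch of what that override can look like, following @rmoff's pattern: start the worker in the background, wait for its REST API, then load every connector JSON found in the mounted folder. The service name, port 8083, and the one-file-per-connector layout under `/scripts` are assumptions, not part of the original compose file.

```yaml
# Hypothetical compose fragment (names and paths are illustrative)
connect:
  image: confluentinc/cp-kafka-connect
  volumes:
    - $PWD/scripts:/scripts
  command:
    - bash
    - -c
    - |
      # Start the stock distributed worker in the background
      /etc/confluent/docker/run &
      echo "Waiting for Kafka Connect REST API..."
      while [ "$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8083/connectors)" != "200" ]; do
        sleep 5
      done
      # PUT /connectors/{name}/config is idempotent (create or update)
      for f in /scripts/*.json; do
        name=$(basename "$f" .json)
        curl -s -X PUT -H "Content-Type: application/json" \
             --data @"$f" "http://localhost:8083/connectors/$name/config"
      done
      sleep infinity
```

Using `PUT .../config` rather than `POST /connectors` means re-running the container simply updates existing connectors instead of failing on duplicates.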
Thanks, @Matesanz. That looks like the same approach I already posted: https://github.com/confluentinc/cp-docker-images/issues/467#issuecomment-461104319
You still need to modify the container command execution to run different files, though. This issue was opened to avoid needing that.
It's not possible to use standalone mode because the connectors wouldn't be persistent. The Connect container doesn't need to change; it already runs the connect-distributed script. My suggestion was to use...
> official tutorials on this topic use an external file

I don't see where that script is used in the compose file.
Ah, my bad. I was looking for a Connect container rather than ksqlDB.
The Connect containers are ephemeral. Connect in standalone mode isn't configured with the three internal topics that store configs, statuses, or (source) offsets. https://docs.confluent.io/platform/current/connect/concepts.html
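For reference, these are the distributed-worker settings that give connectors durable state across container restarts. The topic names below are the conventional defaults seen in most examples, not values from this thread; replication factors should match your broker count.

```properties
# Distributed mode persists worker state in these compacted Kafka topics
# (standalone mode keeps offsets in a local file instead)
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status

# Replication factors are illustrative; use 1 for a single-broker dev setup
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
```

Because this state lives in Kafka rather than inside the container, a replacement Connect container picks up the same connectors and offsets on startup.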
It has been working fine for me (and others)...

`CONNECT_PLUGIN_PATH: /usr/share/java,/etc/kafka-connect/uber/,/etc/kafka-connect/plugins`

My host directory structure: https://github.com/cricket007/kafka-connect-sandbox/tree/master/kafka-connect
Hello. This question seems more appropriate for Stack Overflow or a forum; it's not an issue for this repo. In particular, you'd measure byte ingress and egress rates in both environments.
Any liveness probe should target the worker itself, via the Connect REST API. Checking individual connector endpoints won't work as a probe: the HTTP status code is still 200 even when the connector or its tasks have failed.
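To illustrate the point: `GET /connectors/{name}/status` returns 200 with the failure details in the JSON body, so a probe has to parse that body. A minimal sketch, assuming the standard status payload shape; the helper name and example connector are made up for illustration.

```python
def failure_reasons(status: dict) -> list:
    """Inspect a /connectors/{name}/status payload and list failures.

    The HTTP response is 200 even when the connector or its tasks are
    FAILED, so the JSON body must be checked explicitly.
    """
    reasons = []
    state = status.get("connector", {}).get("state")
    if state != "RUNNING":
        reasons.append(f"connector state is {state}")
    for task in status.get("tasks", []):
        if task.get("state") != "RUNNING":
            reasons.append(f"task {task.get('id')} state is {task.get('state')}")
    return reasons

# Example payload as returned by GET /connectors/my-sink/status
payload = {
    "name": "my-sink",
    "connector": {"state": "RUNNING", "worker_id": "connect:8083"},
    "tasks": [
        {"id": 0, "state": "RUNNING", "worker_id": "connect:8083"},
        {"id": 1, "state": "FAILED", "worker_id": "connect:8083"},
    ],
}
print(failure_reasons(payload))  # → ['task 1 state is FAILED']
```

A worker-level liveness probe, by contrast, only needs `GET /` or `GET /connectors` to return 200.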