Unable to push data via Redpanda (following the documentation)
Hi Parseable Team,
I'm evaluating Parseable in combination with Redpanda, and I'm following the demo setup at: https://www.parseable.com/docs/integrations/redpanda
Everything is up and running until I send data to the topic, as described here: https://www.parseable.com/docs/integrations/redpanda#send-data-to-redpanda-topic
Then I receive the following error:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:611)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:244)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.kafka.connect.errors.ConnectException: Sending failed and no retries remain, stopping
    at io.aiven.kafka.connect.http.HttpSinkTask.sendEach(HttpSinkTask.java:107)
    at io.aiven.kafka.connect.http.HttpSinkTask.put(HttpSinkTask.java:81)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
    ... 10 more
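For context, the "send data" step is just producing a JSON record into the topic, e.g. with rpk inside the redpanda container (the topic name demo below is a placeholder for the one used in the guide):

# create a topic and produce a single JSON test record (topic name is a placeholder)
docker compose exec redpanda rpk topic create demo
echo '{"app": "test", "level": "info", "message": "hello parseable"}' | docker compose exec -T redpanda rpk topic produce demo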
I tried switching the HTTP Sink Connector from version 0.6.0 to the newer 0.7.0 (which sits in a "connectors" subdirectory), but I get the same error.
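For reference, the sink connector was registered through the Kafka Connect REST API along these general lines; the topic, the http.url (stream name), and the Authorization header below are placeholders rather than my exact values, the real ones come from the Parseable guide:

curl -s -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
        "name": "parseable-sink",
        "config": {
          "connector.class": "io.aiven.kafka.connect.http.HttpSinkConnector",
          "topics": "demo",
          "http.url": "http://parseable:8000/api/v1/logstream/demo",
          "http.authorization.type": "static",
          "http.headers.authorization": "Basic <base64 of user:password>",
          "key.converter": "org.apache.kafka.connect.json.JsonConverter",
          "value.converter": "org.apache.kafka.connect.json.JsonConverter"
        }
      }'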
This is my docker-compose.yml
version: "3"
volumes:
  redpanda:
networks:
  parseable:
services:
  redpanda:
    image: docker.redpanda.com/vectorized/redpanda:v22.3.11
    command:
      - redpanda start
      - --smp 1
      - --overprovisioned
      - --kafka-addr PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      - --advertise-kafka-addr PLAINTEXT://redpanda:29092,OUTSIDE://localhost:9092
      - --pandaproxy-addr 0.0.0.0:8082
      - --advertise-pandaproxy-addr localhost:8082
    ports:
      - 8081:8081
      - 8082:8082
      - 9092:9092
      - 9644:9644
      - 29092:29092
    volumes:
      - redpanda:/var/lib/redpanda/data
    networks:
      - parseable
  console:
    image: docker.redpanda.com/vectorized/console:v2.1.1
    entrypoint: /bin/sh
    command: -c "echo \"$$CONSOLE_CONFIG_FILE\" > /tmp/config.yml; /app/console"
    environment:
      CONFIG_FILEPATH: /tmp/config.yml
      CONSOLE_CONFIG_FILE: |
        kafka:
          brokers: ["redpanda:29092"]
          schemaRegistry:
            enabled: true
            urls: ["http://redpanda:8081"]
        redpanda:
          adminApi:
            enabled: true
            urls: ["http://redpanda:9644"]
        connect:
          enabled: true
          clusters:
            - name: local-connect-cluster
              url: http://connect:8083
    ports:
      - 8080:8080
    networks:
      - parseable
    depends_on:
      - redpanda
  connect:
    image: docker.redpanda.com/vectorized/connectors:1.0.0-dev-dff1c57
    #platform: 'linux/amd64'
    depends_on:
      - redpanda
    ports:
      - "8083:8083"
    environment:
      CONNECT_CONFIGURATION: |
        key.converter=org.apache.kafka.connect.json.JsonConverter
        value.converter=org.apache.kafka.connect.json.JsonConverter
        group.id=connectors-cluster
        offset.storage.topic=_internal_connectors_offsets
        config.storage.topic=_internal_connectors_configs
        status.storage.topic=_internal_connectors_status
        config.storage.replication.factor=-1
        offset.storage.replication.factor=-1
        status.storage.replication.factor=-1
        offset.flush.interval.ms=1000
        producer.linger.ms=50
        producer.batch.size=131072
        key.converter.schemas.enable=false
        value.converter.schemas.enable=false
        plugin.path=/opt/kafka/redpanda-plugins,/tmp
      CONNECT_BOOTSTRAP_SERVERS: redpanda:29092
      CONNECT_GC_LOG_ENABLED: "false"
      CONNECT_HEAP_OPTS: -Xms512M -Xmx512M
      CONNECT_LOG_LEVEL: info
    networks:
      - parseable
    volumes:
      - ./connectors/http-connector-for-apache-kafka-0.7.0:/opt/kafka/redpanda-plugins/http-connector-for-apache-kafka-0.7.0
  parseable:
    image: parseable/parseable:latest
    command: ["parseable", "s3-store"]
    restart: unless-stopped
    ports:
      - 8000:8000
    volumes:
      - /dockerData/parseable/staging:/staging
    env_file:
      - ./parseable.env
The only difference from the "original" docker-compose (from here: https://www.parseable.com/redpanda/docker-compose.yaml) is that I'm using s3-store instead of the local store.
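For completeness, parseable.env is not shown above; for s3-store it contains roughly the following variables (all values here are placeholders, and the exact set should be checked against the Parseable docs):

P_USERNAME=admin
P_PASSWORD=admin
P_ADDR=0.0.0.0:8000
P_STAGING_DIR=/staging
P_S3_URL=https://s3.us-east-1.amazonaws.com
P_S3_REGION=us-east-1
P_S3_BUCKET=parseable-demo
P_S3_ACCESS_KEY=<access-key>
P_S3_SECRET_KEY=<secret-key>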
Did I do something wrong?
Thanks for reporting, @thinkORo. Let us take a look and get back to you on this.
@balaji-jr : Quick update: I rebuilt this setup, but based on two separate, independent environments:
- Parseable
- Redpanda with Console and Connect
And now I can push data from Redpanda to Parseable without any issues. There must be some misconfiguration in the documented setup.
I'm using this docker-compose.yml (https://docs.redpanda.com/current/reference/docker-compose/#owl-shop-sample-application), but without the "owl-shop" service. For the connector I added http-connector-for-apache-kafka-0.7.0 (the newer 0.7.0) from your example.
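For anyone reproducing this, the quickest sanity checks in the new setup are to confirm that the 0.7.0 plugin is actually picked up by Connect and that the sink task is RUNNING (the connector name parseable-sink is a placeholder):

curl -s http://localhost:8083/connector-plugins | grep -i HttpSinkConnector
curl -s http://localhost:8083/connectors/parseable-sink/status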
If you need any further details, just let me know.
Thanks @thinkORo. I guess the documentation needs to be updated for the newer releases; it is a bit dated. We'll check and update the docs. Thank you for the update.