kafka-connect-elasticsearch
kafka-connect-elasticsearch error message: Failed to execute the bulk request
I am writing a large amount of data consumed from Kafka into Elasticsearch via kafka-connect-elasticsearch. Partway through a run, the connector occasionally logs WARN [elasticsearch-sink|task-1] Failed to execute bulk request due to java.io.IOException: Connection reset by peer. Retrying attempt (2/6). Once it reaches the maximum number of retries, the task is terminated. The load metrics on my Elasticsearch service all look normal. Why does this happen, and how can I solve it?
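For context, here is a minimal sketch of the kind of sink connector config involved (the connector name, topic, connection.url, and all values below are illustrative, not my exact setup). The retry behaviour behind the warning is governed by max.retries and retry.backoff.ms, and the HTTP timeouts by connection.timeout.ms and read.timeout.ms:

# Illustrative Elasticsearch sink config (names and values are examples, not my exact setup)
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=my-topic
connection.url=http://elasticsearch:9200
# bulk request sizing
batch.size=2000
max.in.flight.requests=5
# retry behaviour behind the "Retrying attempt (2/6)" warning
# (the /6 seems to be max.retries plus the initial attempt)
max.retries=5
retry.backoff.ms=100
# HTTP client timeouts towards Elasticsearch
connection.timeout.ms=1000
read.timeout.ms=3000
key.ignore=true
schema.ignore=true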
Also facing a similar issue.
The detailed error message I am getting:
WARN Failed to execute bulk request due to null. Retrying attempt (3/6) after backoff of 238 ms (io.confluent.connect.elasticsearch.RetryUtil)
connect_1 | [2023-05-17 11:12:38,659] ERROR Failed to execute bulk request due to 'org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes' after 6 attempt(s) (io.confluent.connect.elasticsearch.RetryUtil)
connect_1 | org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes
Any updates? Facing the same issue
Same issue here
[2024-04-03 14:49:54,396] INFO Skipping DLQ insertion for DataStream type. (io.confluent.connect.elasticsearch.ElasticsearchClient)
[2024-04-03 14:50:00,833] WARN Failed to execute bulk request due to java.lang.NullPointerException. Retrying attempt (1/6) after backoff of 194 ms (io.confluent.connect.elasticsearch.RetryUtil)
[2024-04-03 14:50:01,031] WARN INTERNAL version conflict for operation CREATE on document '1712145000-3b4d4c42-2fa9-4f93-9a92-9d56d3ce6412' version -1 in index '.ds-logs-kafka-invocation-audit-qa-2022.09.21-000001'. (io.confluent.connect.elasticsearch.ElasticsearchClient)
I am facing the same issue in this project: springboot-kafka-connect-jdbc-streams.
A log from one of the sink connectors:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:618)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:336)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:257)
at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Bulk request failed
at io.confluent.connect.elasticsearch.ElasticsearchClient$1.afterBulk(ElasticsearchClient.java:443)
at org.elasticsearch.action.bulk.BulkRequestHandler$1.onFailure(BulkRequestHandler.java:64)
at org.elasticsearch.action.ActionListener$Delegating.onFailure(ActionListener.java:66)
at org.elasticsearch.action.ActionListener$RunAfterActionListener.onFailure(ActionListener.java:350)
at org.elasticsearch.action.ActionListener$Delegating.onFailure(ActionListener.java:66)
at org.elasticsearch.action.bulk.Retry$RetryHandler.onFailure(Retry.java:123)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:216)
... 5 more
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to execute bulk request due to 'java.net.SocketTimeoutException: 3,000 milliseconds timeout on connection http-outgoing-37 [ACTIVE]' after 6 attempt(s)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:165)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:119)
at io.confluent.connect.elasticsearch.ElasticsearchClient.callWithRetries(ElasticsearchClient.java:490)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:210)
... 5 more
Caused by: java.net.SocketTimeoutException: 3,000 milliseconds timeout on connection http-outgoing-37 [ACTIVE]
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:903)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:299)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:287)
at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2699)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:2171)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:2137)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:2105)
at org.elasticsearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:620)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$0(ElasticsearchClient.java:212)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:158)
... 8 more
Caused by: java.net.SocketTimeoutException: 3,000 milliseconds timeout on connection http-outgoing-37 [ACTIVE]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:387)
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92)
at org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175)
at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:261)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:502)
at org.apache.http.impl.nio.reactor.BaseIOReactor.validate(BaseIOReactor.java:211)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
... 1 more
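If it helps anyone: the 3,000 ms in the SocketTimeoutException appears to line up with the connector's read.timeout.ms, so one thing to try is raising the HTTP timeouts and the retry budget, and shrinking the bulk batches. A sketch of the overrides I mean, where the values are guesses to tune for your own cluster rather than recommendations:

# Possible overrides to ride out slow bulk responses (values are guesses, not recommendations)
read.timeout.ms=30000
connection.timeout.ms=5000
max.retries=10
retry.backoff.ms=1000
# smaller bulks put less pressure on each individual request
batch.size=500
flush.timeout.ms=60000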