clickhouse-kafka-connect

Non-Recoverable Exception When Memory Limit Is Exceeded on ClickHouse Server

Open oleg-savko opened this issue 1 year ago • 1 comment

It seems this should be a recoverable exception, since a Memory Limit Exceeded error on the ClickHouse server is temporary.

{
name: "...",
connector: {
state: "RUNNING",
worker_id: "connect:8083"
},
tasks: [
{
id: 0,
state: "FAILED",
worker_id: "connect:8083",
trace: "org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:628)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:340)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:238)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:207)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:229)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:284)
	at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: Number of records: 4986
	at com.clickhouse.kafka.connect.util.Utils.handleException(Utils.java:116)
	at com.clickhouse.kafka.connect.sink.ClickHouseSinkTask.put(ClickHouseSinkTask.java:68)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:593)
	... 11 more
Caused by: java.lang.RuntimeException: Topic: [workitems_changes], Partition: [0], MinOffset: [31956392], MaxOffset: [31961289], (QueryId: [1b838698-c5bf-40ab-b6a9-90b59e1bc6ce])
	at com.clickhouse.kafka.connect.sink.processing.Processing.doInsert(Processing.java:63)
	at com.clickhouse.kafka.connect.sink.processing.Processing.doLogic(Processing.java:182)
	at com.clickhouse.kafka.connect.sink.ProxySinkTask.put(ProxySinkTask.java:93)
	at com.clickhouse.kafka.connect.sink.ClickHouseSinkTask.put(ClickHouseSinkTask.java:64)
	... 12 more
Caused by: java.util.concurrent.ExecutionException: com.clickhouse.client.ClickHouseException: Code: 241. DB::Exception: Memory limit (total) exceeded: would use 57.42 GiB (attempt to allocate chunk of 4230605 bytes), maximum: 56.61 GiB. OvercommitTracker decision: Memory overcommit isn't used. Waiting time or overcommit denominator are set to zero. (MEMORY_LIMIT_EXCEEDED) (version 23.8.4.69 (official build))
, server ClickHouseNode [uri=http://...]@-1553486099
	at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
	at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2005)
	at com.clickhouse.kafka.connect.sink.db.ClickHouseWriter.doInsertRawBinary(ClickHouseWriter.java:464)
	at com.clickhouse.kafka.connect.sink.db.ClickHouseWriter.doInsert(ClickHouseWriter.java:153)
	at com.clickhouse.kafka.connect.sink.processing.Processing.doInsert(Processing.java:61)
	... 15 more
Caused by: com.clickhouse.client.ClickHouseException: Code: 241. DB::Exception: Memory limit (total) exceeded: would use 57.42 GiB (attempt to allocate chunk of 4230605 bytes), maximum: 56.61 GiB. OvercommitTracker decision: Memory overcommit isn't used. Waiting time or overcommit denominator are set to zero. (MEMORY_LIMIT_EXCEEDED) (version 23.8.4.69 (official build))
, server ClickHouseNode [uri=http://...]@-1553486099
	at com.clickhouse.client.ClickHouseException.of(ClickHouseException.java:168)
	at com.clickhouse.client.AbstractClient.lambda$execute$0(AbstractClient.java:275)
	at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
	... 3 more
Caused by: java.io.IOException: Code: 241. DB::Exception: Memory limit (total) exceeded: would use 57.42 GiB (attempt to allocate chunk of 4230605 bytes), maximum: 56.61 GiB. OvercommitTracker decision: Memory overcommit isn't used. Waiting time or overcommit denominator are set to zero. (MEMORY_LIMIT_EXCEEDED) (version 23.8.4.69 (official build))

	at com.clickhouse.client.http.ApacheHttpConnectionImpl.checkResponse(ApacheHttpConnectionImpl.java:209)
	at com.clickhouse.client.http.ApacheHttpConnectionImpl.post(ApacheHttpConnectionImpl.java:243)
	at com.clickhouse.client.http.ClickHouseHttpClient.send(ClickHouseHttpClient.java:118)
	at com.clickhouse.client.AbstractClient.sendAsync(AbstractClient.java:161)
	at com.clickhouse.client.AbstractClient.lambda$execute$0(AbstractClient.java:273)
	... 4 more
"
}
],
type: "sink"
}
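
For context on what "recoverable" would mean here: the Connect framework only retries a batch when the sink task throws org.apache.kafka.connect.errors.RetriableException; a plain ConnectException, as in the trace above, fails the task. Below is a minimal sketch (not the connector's actual Utils.handleException logic) of how a hypothetical error-mapping helper could treat the ClickHouse MEMORY_LIMIT_EXCEEDED code (241, visible in the trace) as retriable; the class and method names are made up for illustration.

```java
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.errors.RetriableException;

import com.clickhouse.client.ClickHouseException;

// Hypothetical error-mapping helper (NOT the connector's actual Utils.handleException):
// rethrows transient ClickHouse server errors as RetriableException so the Connect
// framework retries the batch instead of failing the task.
public final class SinkErrorMapping {

    // ClickHouse server error code for MEMORY_LIMIT_EXCEEDED, as seen in the trace above.
    private static final int MEMORY_LIMIT_EXCEEDED = 241;

    public static void rethrow(String context, Throwable t) {
        ClickHouseException che = findClickHouseException(t);
        if (che != null && che.getErrorCode() == MEMORY_LIMIT_EXCEEDED) {
            // Temporary server-side memory pressure: ask the framework to retry this batch.
            throw new RetriableException(context, t);
        }
        // Everything else stays fatal, matching the current behaviour shown in the trace.
        throw new ConnectException(context, t);
    }

    private static ClickHouseException findClickHouseException(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof ClickHouseException) {
                return (ClickHouseException) cur;
            }
        }
        return null;
    }
}
```

With a mapping like this, the worker would redeliver the same batch on a later put() call instead of leaving the task in the FAILED state.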

oleg-savko · Dec 19 '23

I'm seeing a similar issue:

db.table_mv (df168f62-.....). (MEMORY_LIMIT_EXCEEDED) (version 23.3.18.15 (official build)); 
, server ClickHouseNode [uri=http://...]@1607079269;
  ... 3 more; Caused by: java.io.IOException: Code: 241. DB::Exception: Memory limit (total) exceeded: would use 172.11 GiB (attempt to allocate chunk of 4498271 bytes), maximum: 170.01 GiB.: Insertion status:; 
Wrote 1 blocks and 2 rows on shard 0 replica 0, host3:9440 (average 20 ms per block); 
Wrote 1 blocks and 3 rows on shard 1 replica 0, host2:9440 (average 1 ms per block); 
Wrote 1 blocks and 3 rows on shard 2 replica 0, host1:9440 (average 25 ms per block); 
Wrote 1 blocks and 1 rows on shard 3 replica 0, host4:9440 (average 25 ms per block); 
: while pushing to view db.table_mv (df168f62-......).

igorvoltaic · Dec 21 '23
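
Until the exception is treated as retriable, a task that hits it stays FAILED and has to be restarted by hand. A minimal sketch of doing that through Kafka Connect's task-restart REST endpoint follows; the worker URL, connector name, and task id are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Restart a single failed sink task through the Kafka Connect REST API.
// The worker URL, connector name, and task id are placeholders; substitute your own.
public class RestartFailedTask {
    public static void main(String[] args) throws Exception {
        String workerUrl = "http://connect:8083";
        String connectorName = "clickhouse-sink";
        int taskId = 0;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(workerUrl + "/connectors/" + connectorName + "/tasks/" + taskId + "/restart"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // An empty success response means the restart was accepted.
        System.out.println("Restart returned HTTP " + response.statusCode());
    }
}
```

The same call can be issued with curl if that is more convenient; either way, the connector will re-consume from the last committed offsets once the task comes back.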