Batch insert with r2dbc runs inserts one by one
It looks like bulk inserts are executed row by row rather than as a single batch.
I have created the following table:
CREATE TABLE IF NOT EXISTS test
(
    `a` Int16,
    `b` Int16,
    `created` DateTime
)
ENGINE = MergeTree()
PARTITION BY (toYYYYMM(created))
ORDER BY (created)
TTL created + INTERVAL 13 MONTH;
If I try to do the following:
public Mono<Void> saveAll() {
    return Mono.from(connectionFactory.create())
            .flatMapMany(conn -> execute(conn))
            .then();
}
private Publisher<? extends Result> execute(Connection conn) {
    return conn.createStatement("insert into test values (:a, :b, :created)")
            .bind("a", 1)
            .bind("b", 2)
            .bind("created", LocalDateTime.now())
            .add()
            .bind("a", 5)
            .bind("b", 6)
            .bind("created", LocalDateTime.now())
            .execute();
}
When checking the logs, I can see that two connections are established and the inserts run independently, so this is not acting as a batch insert:
2024-09-02T10:43:28.253Z DEBUG 49005 --- [ckHouseWorker-1] com.clickhouse.client.AbstractClient : Connection established: com.clickhouse.client.http.HttpUrlConnectionImpl@1f010075
2024-09-02T10:43:28.253Z DEBUG 49005 --- [ckHouseWorker-1] c.c.client.http.ClickHouseHttpClient : Query: insert into `default`.test values (1, 2, 1725273808)
2024-09-02T10:43:28.254Z DEBUG 49005 --- [ckHouseWorker-2] com.clickhouse.client.AbstractClient : Connection established: com.clickhouse.client.http.HttpUrlConnectionImpl@35e8f1b6
2024-09-02T10:43:28.254Z DEBUG 49005 --- [ckHouseWorker-2] c.c.client.http.ClickHouseHttpClient : Query: insert into `default`.test values (5, 6, 1725273808)
Am I doing something wrong, or is this the intended behaviour?
I have also tried the following approach, but the behaviour is the same:
return conn.createBatch()
        .add("insert into test values (1, 2, '2024-01-17 00:00:00')")
        .add("insert into test values (5, 6, '2024-01-17 00:00:00')")
        .execute();
I am using clickhouse-r2dbc version 0.6.4.
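As a temporary workaround, the rows could be concatenated into a single multi-row VALUES clause and sent as one statement, so the server receives a single INSERT. This is only a sketch, not the library's intended batching API: the Row record and the insertAll helper are made up for illustration, the values are inlined instead of bound, and whether it avoids the issue depends on the driver forwarding the SQL unchanged.

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.stream.Collectors;

import org.reactivestreams.Publisher;

import io.r2dbc.spi.Connection;
import io.r2dbc.spi.Result;

// Hypothetical workaround: build one multi-row VALUES clause so the server
// receives a single INSERT statement instead of one statement per row.
class SingleStatementInsert {

    record Row(int a, int b, LocalDateTime created) {}

    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    static Publisher<? extends Result> insertAll(Connection conn, List<Row> rows) {
        // Produces e.g. "(1, 2, '2024-01-17 00:00:00'), (5, 6, '2024-01-17 00:00:00')"
        String values = rows.stream()
                .map(r -> "(" + r.a() + ", " + r.b() + ", '" + FMT.format(r.created()) + "')")
                .collect(Collectors.joining(", "));
        // One statement -> one INSERT on the ClickHouse side, assuming the
        // driver passes the SQL through unchanged.
        return conn.createStatement("insert into test values " + values).execute();
    }
}

Since the values are inlined rather than bound, real input would need proper escaping; the snippet is only meant to show the single-statement shape.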
Good day, @javiercj93! Thank you for reporting! We will look into it.
I am using clickhouse-r2dbc version 0.7.0 and see the issue too.
This looks like a critical issue to me, since you can't really use r2dbc for batch inserts (the rows are inserted into different DB parts instead of a single part).
@eyal-mishor I agree - I will try to look into it as soon as possible.
I observed this behavior in clickhouse-jdbc v0.7.x, while 0.6.x seems to work as expected.
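To confirm the "different DB parts" behaviour mentioned above, one way is to count the active parts of the test table right after the insert. Below is a minimal sketch, assuming the default database from the example and that the driver maps ClickHouse's count() to a Long; the PartCountCheck class and countActiveParts helper are hypothetical.

import io.r2dbc.spi.Connection;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical check: count the active parts of the `test` table right after
// the insert. A single batch insert should create one new part; one part per
// inserted row suggests the rows went in one by one.
class PartCountCheck {

    static Mono<Long> countActiveParts(Connection conn) {
        String sql = "select count() as part_count from system.parts"
                + " where database = 'default' and table = 'test' and active";
        return Flux.from(conn.createStatement(sql).execute())
                .flatMap(result -> result.map((row, meta) ->
                        row.get("part_count", Long.class)))
                .single();
    }
}

The check should be run right after the insert, since background merges can later combine parts and hide the difference.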