ClickHouse Java client - reusing the same instance of ClickHouseClient gives an execution timeout after 10 inserts
I am new to ClickHouse and have put together a Java project using the Java client.
<dependency>
    <groupId>com.clickhouse</groupId>
    <artifactId>clickhouse-http-client</artifactId>
    <version>0.6.0</version>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents.client5</groupId>
    <artifactId>httpclient5</artifactId>
    <version>5.2.1</version>
</dependency>
HashMap<String, String> options = new HashMap<>();
options.put("user", "user");
options.put("password", "pass");
String url = "my-clickhouse-host";
ClickHouseNode server = ClickHouseNode.of(url, options);
ClickHouseRequest<?> read = ClickHouseClient.builder()
        .nodeSelector(ClickHouseNodeSelector.of(ClickHouseProtocol.HTTP))
        .option(ClickHouseClientOption.SOCKET_KEEPALIVE, true)
        .build()
        .read(server).write();

for (int i = 0; i < 11; i++) {
    String id = UUID.randomUUID().toString();
    String field1 = "field1" + i;
    String field2 = "field2" + i;
    String field3 = "field3" + i;
    String field4 = "field4" + i;
    String query = String.format("INSERT INTO mytable VALUES ('%s', '%s', '%s', '%s', '%s')",
            id, field1, field2, field3, field4);

    read.format(ClickHouseFormat.CustomSeparated)
            .query(query)
            .executeAndWait();
}
This code only inserts 10 records; after that I get a "Code: 159. Execution timed out" exception.
I have already tried changing and adding some ClickHouseClientOptions, but I always get the same behaviour. How can I reuse the same client connection in ClickHouse with the Java client?
read.format(ClickHouseFormat.CustomSeparated)
Why do you use .format? And why do you use CustomSeparated?
@den-crane That was just an experiment, but I get the exact same behaviour with or without it.
#1538 same problem?
@occunha thank you for reporting the issue. I suspect it happens because some internal timer is not stopped and keeps running, since you are reusing the request object. I will try to replicate the issue.
Thanks @shilaidun! It is a good idea to check it. Will do.
Good day! The root cause of the problem is that the response object is not closed before sending a new request. I've reproduced the problem, and here is my fixed version:
public void write() {
    try (ClickHouseClient client = getClient()) {
        ClickHouseRequest<?> read = client.read(getServer()).write();
        for (int i = 0; i < 20; i++) {
            System.out.println("Writing data. Iteration #" + i + " of 20.");
            String id = UUID.randomUUID().toString();
            String field1 = "field1" + i;
            String field2 = "field2" + i;
            String field3 = "field3" + i;
            String field4 = "field4" + i;
            String query = String.format("INSERT INTO mytable VALUES ('%s', '%s', '%s', '%s', '%s')",
                    id, field1, field2, field3, field4);
            try (ClickHouseResponse resp = read.format(ClickHouseFormat.CustomSeparated)
                    .query(query)
                    .executeAndWait()) {
                log.info("Response: {}", resp.getSummary());
            } catch (Exception e) {
                log.error("Failed to write data", e);
                throw new RuntimeException(e);
            }
        }
    } catch (Exception e) {
        log.error("Failed to write data", e);
        throw new RuntimeException(e);
    }
}
The class ClickHouseResponse is Closeable and can be used in a try-with-resources block:
try (ClickHouseResponse resp = read.format(ClickHouseFormat.CustomSeparated)
        .query(query)
        .executeAndWait()) {
    log.info("Response: {}", resp.getSummary());
} catch (Exception e) {
    log.error("Failed to write data", e);
    throw new RuntimeException(e);
}
So the resp object will be closed on exiting the block.
I see the problem is that this kind of query does not expect to read data from the server, and it would be good to make the client close the response automatically in such cases.
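In the meantime, here is a minimal workaround sketch on the application side. The insertRow helper name is only for illustration and is not part of the client API; it just packages the try-with-resources pattern shown above so every insert closes its response:
// Illustrative helper (not part of the client API): run an INSERT and always
// close the response, so the reused ClickHouseRequest never keeps a response open.
static void insertRow(ClickHouseRequest<?> request, String query) {
    try (ClickHouseResponse resp = request.format(ClickHouseFormat.CustomSeparated)
            .query(query)
            .executeAndWait()) {
        // nothing to read back from an INSERT; closing releases the connection
    } catch (Exception e) {
        throw new RuntimeException("Failed to execute: " + query, e);
    }
}
The loop from the original example can then call insertRow(read, query) on each iteration without running out of connections.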
@chernser hasn't it been fixed for v2?
@mshustov It is an application error. We can maybe improve it by wrapping the future and closing the connection on timeout.
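As a rough sketch of that idea at the application level (not the actual library change), assuming execute() returning CompletableFuture<ClickHouseResponse> and Java 9+ for orTimeout:
// Sketch only: make sure the response is closed even if the call fails or times out,
// so the reused request/connection is not left holding an open response.
// Needs java.util.concurrent.CompletableFuture and TimeUnit in addition to the
// com.clickhouse.client classes already used above.
static CompletableFuture<Void> insertAsync(ClickHouseRequest<?> request, String query) {
    return request.format(ClickHouseFormat.CustomSeparated)
            .query(query)
            .execute()                          // CompletableFuture<ClickHouseResponse>
            .orTimeout(30, TimeUnit.SECONDS)    // Java 9+
            .whenComplete((resp, err) -> {
                if (resp != null) {
                    try {
                        resp.close();           // release the connection in every outcome
                    } catch (Exception ignore) {
                        // best effort: ignore close failures
                    }
                }
            })
            .thenAccept(resp -> { /* nothing to read back from an INSERT */ });
}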
relates https://github.com/ClickHouse/clickhouse-java/issues/1619