fetchSize is not effective
Steps to reproduce the behavior (Required)
fetchSize is not effective. When submitting a large query, the program crashes with an out-of-memory error on the client.
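A minimal sketch of the kind of client code involved, assuming a standard MySQL Connector/J connection to a StarRocks FE (the URL, credentials, table name, and fetch size are placeholders, not taken from the original report):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeRepro {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust host/port/db/credentials.
        String url = "jdbc:mysql://starrocks-fe:9030/demo_db";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // With default Connector/J settings a positive fetch size is only a hint:
            // the full result set is still buffered in client memory, so a very
            // large query can exhaust the JVM heap.
            stmt.setFetchSize(10_000);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                long rows = 0;
                while (rs.next()) {
                    rows++;
                }
                System.out.println("rows: " + rows);
            }
        }
    }
}
```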
Expected behavior (Required)
Real behavior (Required)
StarRocks version (Required)
- 3.3.8
Is there no plan to support this? Is the only option to wait for the Arrow Flight interface to solve it?
Cursors are not implemented on the server side, so the only way to prevent an OOM on the client at the moment is to use JDBC streaming mode.
We tested it and it works without an OOM on the client, but it may not be as fast as you expect. The other option is the Arrow Flight SQL protocol.
From the MySQL documentation:
By default, ResultSets are completely retrieved and stored in memory. In most cases this is the most efficient way to operate, and due to the design of the MySQL network protocol is easier to implement. If you are working with ResultSets that have a large number of rows or large values, and can not allocate heap space in your JVM for the memory required, you can tell the driver to stream the results back one row at a time.
To enable this functionality, you need to create a Statement instance in the following manner:
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                            java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
The combination of a forward-only, read-only result set, with a fetch size of Integer.MIN_VALUE serves as a signal to the driver to stream result sets row-by-row. After this any result sets created with the statement will be retrieved row-by-row.
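Putting that together, here is a self-contained sketch of streaming rows from StarRocks through the MySQL JDBC driver; the connection URL, credentials, and query are placeholder assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingRead {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://starrocks-fe:9030/demo_db";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             // Forward-only, read-only statement, as required for row-by-row streaming.
             Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                   ResultSet.CONCUR_READ_ONLY)) {
            // Integer.MIN_VALUE tells Connector/J to stream rows one at a time
            // instead of buffering the whole result set on the client.
            stmt.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                long rows = 0;
                while (rs.next()) {
                    // Process each row as it arrives; client memory stays flat.
                    rows++;
                }
                System.out.println("streamed rows: " + rows);
            }
        }
    }
}
```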
I see the same behavior; the difference shows up when comparing the MySQL JDBC client against a MySQL Server Docker instance versus StarRocks.
StarRocks only works with streaming for large datasets. That is fine, but it is slower than fetching by a fixed size; there is no OOM, though. You just need to adjust the timeout on the server for long extraction queries.
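For the timeout adjustment, a hedged sketch assuming the StarRocks query_timeout session variable (in seconds) governs how long a single extraction query may run; the variable name and value should be checked against your StarRocks version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RaiseQueryTimeout {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://starrocks-fe:9030/demo_db";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // Assumption: raising the per-session query timeout (seconds) keeps a
            // long streaming extraction from being cancelled; 3600 is an example value.
            stmt.execute("SET query_timeout = 3600");
            // ...run the long streaming extraction on this same connection...
        }
    }
}
```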