
taosadapter memory usage spikes and exhausts server memory, throwing java.sql.SQLException: TDengine ERROR (0x73a): Query memory exhausted

Open Trackr2014 opened this issue 7 months ago • 10 comments

1. Installed and deployed a single node with the 3.3.6.3 package.
2. Ran the official high-performance writing demo (https://docs.taosdata.com/develop/high/) with slight modifications. After about one hour of running, memory usage climbs from near zero to 20 GB, and the program eventually fails with java.sql.SQLException: TDengine ERROR (0x73a): Query memory exhausted.
3. The code is as follows:

import cn.hutool.core.lang.Assert;
import com.taosdata.jdbc.TSDBDriver;
import lombok.extern.slf4j.Slf4j;

import java.sql.*;
import java.util.Properties;
import java.util.Random;

@Slf4j
public class WSHighVolumeDemo {

// modify host to your own
private static final String HOST = "10.60.136.32";
private static final int PORT = 6041;
private static final Random random = new Random(System.currentTimeMillis());
private static final int NUM_OF_SUB_TABLE = 10000;
private static final int NUM_OF_ROW = 1000;

public static void main(String[] args) throws SQLException {

    String url = "jdbc:TAOS-WS://" + HOST + ":" + PORT + "/?user=root&password=taosdata";
    Properties properties = new Properties();
    // Use an efficient writing mode
    properties.setProperty(TSDBDriver.PROPERTY_KEY_ASYNC_WRITE, "stmt");
    // The maximum number of rows to be batched in a single write request
    properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_SIZE_BY_ROW, "10000");
    // The maximum number of rows to be cached in the queue (for each backend write
    // thread)
    properties.setProperty(TSDBDriver.PROPERTY_KEY_CACHE_SIZE_BY_ROW, "100000");
    // Number of backend write threads
    properties.setProperty(TSDBDriver.PROPERTY_KEY_BACKEND_WRITE_THREAD_NUM, "5");
    // Enable this option to automatically reconnect when the connection is broken
    properties.setProperty(TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT, "true");
    // The maximum time to wait for a write request to be processed by the server in
    // milliseconds
    properties.setProperty(TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT, "5000");
    // Enable this option to copy data when modifying binary data after the
    // `addBatch` method is called
    properties.setProperty(TSDBDriver.PROPERTY_KEY_COPY_DATA, "false");
    // Enable this option to check the length of the sub-table name and the length
    // of variable-length data types
    properties.setProperty(TSDBDriver.PROPERTY_KEY_STRICT_CHECK, "false");

    try (Connection conn = DriverManager.getConnection(url, properties)) {
        init(conn);

        // If you are certain that the child table exists, you can avoid binding the tag
        // column to improve performance.
        while (true) {
            doInsert(conn);
        }
    } catch (Exception ex) {
        // please refer to the JDBC specifications for detailed exceptions info
        System.out.printf("Failed to insert to table meters using efficient writing, %sErrMessage: %s%n",
                ex instanceof SQLException ? "ErrCode: " + ((SQLException) ex).getErrorCode() + ", " : "",
                ex.getMessage());
        // Print stack trace for context in examples. Use logging in production.
        ex.printStackTrace();
        throw ex;
    }
}

private static void doInsert(Connection conn) throws SQLException {
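    // Bind against the supertable with tbname plus the tag and data columns, so a single
    // prepared statement can write to many subtables, creating them on demand if they
    // do not exist yet.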
    String sql = "INSERT INTO power.meters (tbname, groupid, location, ts, current, voltage, phase) VALUES (?,?,?,?,?,?,?)";
    for (int j = 0; j < NUM_OF_ROW; j++) {
        try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
            long current = System.currentTimeMillis();

            int rows = 0;

            // To simulate real-world scenarios, we adopt the approach of writing a batch of
            // sub-tables, with one record per sub-table.
            for (int i = 1; i <= NUM_OF_SUB_TABLE; i++) {
                pstmt.setString(1, "d_" + i);

                pstmt.setInt(2, i);
                pstmt.setString(3, "location_" + i);

                pstmt.setTimestamp(4, new Timestamp(current + j));
                pstmt.setFloat(5, random.nextFloat() * 30);
                pstmt.setInt(6, random.nextInt(300));
                pstmt.setFloat(7, random.nextFloat());

                // when the queue of backend cached data reaches the maximum size, this method
                // will be blocked
                pstmt.addBatch();
                rows++;
            }

            pstmt.executeBatch();

            // Note: rows is reset on each outer iteration and peaks at NUM_OF_SUB_TABLE
            // (10,000), so this 50,000-row check never fires in this modified demo.
            if (rows % 50000 == 0) {
                // The semantics of executeUpdate in efficient writing mode is to synchronously
                // retrieve the number of rows written between the previous call and the current
                // one.
                int affectedRows = pstmt.executeUpdate();
                Assert.equals(50000, affectedRows);

            }
            log.info("Successfully wrote {} rows", rows);
        }
    }

}

private static void init(Connection conn) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
        stmt.execute("DROP DATABASE IF EXISTS power");
        stmt.execute("CREATE DATABASE IF NOT EXISTS power");
        stmt.execute("USE power");
        stmt.execute(
                "CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))");
    }
}

}

Trackr2014 avatar May 26 '25 07:05 Trackr2014

Image

Trackr2014 avatar May 26 '25 07:05 Trackr2014

Please upgrade to 3.3.6.6; it fixes a memory leak issue.
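After upgrading, one way to confirm which server version the client actually connects to is to query it over the same WebSocket URL used in the demo. A minimal sketch, assuming the host and credentials from the code above; SELECT SERVER_VERSION() should be available on TDengine 3.x, but adjust the URL to your environment:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckServerVersion {
    public static void main(String[] args) throws Exception {
        // Host and port taken from the demo above; adjust for your deployment.
        String url = "jdbc:TAOS-WS://10.60.136.32:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT SERVER_VERSION()")) {
            if (rs.next()) {
                System.out.println("server version: " + rs.getString(1));
            }
        }
    }
}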

yu285 avatar May 27 '25 09:05 yu285

I'm testing with 3.3.6.6 now, but with the WS high-performance write mode and 32 concurrent threads, 3.3.6.6 produces the error shown in the image below. What bug is this?

Image

Trackr2014 avatar May 28 '25 01:05 Trackr2014

In my test, 3.3.6.6 still grows slowly. After running for 1 hour, memory went from 11.7 GB to 14.8 GB. The bug is still there.

Image

Trackr2014 avatar May 28 '25 02:05 Trackr2014

Now it's at 18.2 GB.

Image

Trackr2014 avatar May 28 '25 06:05 Trackr2014

Is there a fix for this yet? I'm also running into the memory spike problem.

apricot-flower avatar May 29 '25 09:05 apricot-flower

Is there a fix for this yet? I'm also running into the memory spike problem.

I tried the native connection (jdbc:TAOS:) and it works fine, but its write performance is much lower than with jdbc:TAOS-WS:.

Trackr2014 avatar May 30 '25 03:05 Trackr2014
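For reference, the workaround above amounts to swapping the WebSocket URL for a native one. A minimal sketch, assuming the default native port 6030 and that the taosc client library is installed on the application host; whether the efficient-writing properties from the demo also take effect over a native connection should be verified against the JDBC connector documentation:

import java.sql.Connection;
import java.sql.DriverManager;

public class NativeConnectionSketch {
    public static void main(String[] args) throws Exception {
        // WebSocket URL from the demo above routes through taosadapter on port 6041:
        //   jdbc:TAOS-WS://10.60.136.32:6041/?user=root&password=taosdata
        // Native URL (assumed default port 6030) bypasses taosadapter and requires
        // the local taosc client library:
        String nativeUrl = "jdbc:TAOS://10.60.136.32:6030/?user=root&password=taosdata";

        try (Connection conn = DriverManager.getConnection(nativeUrl)) {
            System.out.println("connected via native driver: " + !conn.isClosed());
        }
    }
}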

In my test, 3.3.6.6 still grows slowly. After running for 1 hour, memory went from 11.7 GB to 14.8 GB. The bug is still there.

Image

If 3.3.6.6 still leaks, please add me on WeChat and we can troubleshoot it in detail.

a15652223354

yu285 avatar May 30 '25 03:05 yu285

@Trackr2014 Could you add me on WeChat (a15652223354) so we can analyze this? You could also upgrade to the latest version, 3.3.6.9, and observe.

sheyanjie-qq avatar May 30 '25 03:05 sheyanjie-qq

Can taosadapter be upgraded on its own? My version is:

taosadapter --version

taosadapter v_branch_HEAD-d8059ff

Aironchu avatar Jun 09 '25 08:06 Aironchu

@Trackr2014 Has this issue been resolved?

centychen avatar Aug 19 '25 04:08 centychen