
Memory leak when providing a block cache

Open krishan1390 opened this issue 9 months ago • 1 comment

Hi,

We have noticed a memory leak when we supply a block cache while opening RocksDB.

The code is attached below. We are using RocksDB version 10.2.1.

Observations from running this program:

  1. The process RSS increases and does not come back down after all resources are closed. The memory is not reclaimed even if we force a GC. After all resources are closed and a GC is forcibly run, the process RSS is still roughly the block cache size plus the row cache size above baseline.
  2. We tested with jemalloc as well and the leak remains, so it should not be a fragmentation problem.
  3. The memory is not reclaimed even when other processes need it. Running the same Java program repeatedly results in an OOM.
  4. Strangely, there is no memory leak on macOS: the RSS increases during execution but comes back down to baseline (~50 MB) after the resources are closed. So the leak only occurs on Linux.
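The RSS numbers above can also be captured from inside the JVM rather than from top/ps, by reading /proc/self/status. This is a Linux-only sketch and not part of the reproducer; RssProbe and readRssKb are hypothetical names:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssProbe {
  // Returns the resident set size of the current process in kB, parsed from
  // the "VmRSS:" line of /proc/self/status (Linux only), or -1 if absent.
  static long readRssKb() throws IOException {
    for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
      if (line.startsWith("VmRSS:")) {
        // The line looks like: "VmRSS:     123456 kB"
        String[] parts = line.trim().split("\\s+");
        return Long.parseLong(parts[1]);
      }
    }
    return -1; // not found (e.g. on a non-Linux OS)
  }

  public static void main(String[] args) throws IOException {
    System.out.println("RSS = " + readRssKb() + " kB");
  }
}
```

Printing this before and after the resources are closed (and after a forced GC) makes the "RSS ≈ cache sizes" observation reproducible in logs.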

krishan1390 avatar May 28 '25 13:05 krishan1390

// Imports needed by this snippet (RocksJava 10.2.1):
import java.nio.charset.StandardCharsets;

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.HyperClockCache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

  static {
    RocksDB.loadLibrary();
  }

  final long BLOCK_CACHE_SIZE = 1024 * 1024 * 1024; // 1GB
  final long ROW_CACHE_SIZE = 100 * 1024 * 1024;   // 100MB
  final int NUM_ENTRIES = 2000 * 1000;
  final int VALUE_SIZE = 1024; // 1KB value

  Cache blockCache;
  Cache rowCache;
  Options options;
  RocksDB rocksDB;

  RocksDBMemoryTest(String path)
      throws RocksDBException {
    setCaches();
    options = createOptions(); // This sets up options for the DB and its default column family
    rocksDB = RocksDB.open(options, path);
  }

  void setCaches() {
    blockCache = new HyperClockCache(BLOCK_CACHE_SIZE, 0, -1, false);
    rowCache = new LRUCache(ROW_CACHE_SIZE);
  }

  Options createOptions() {
    BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
    tableConfig.setBlockCache(blockCache);

    Options opts = new Options();
    opts.setCreateIfMissing(true);
    opts.setRowCache(rowCache);
    opts.setTableFormatConfig(tableConfig);

    return opts;
  }

  void insertAndIterate()
      throws RocksDBException {
    System.out.println("Inserting data...");
    byte[] valueBuffer = new byte[VALUE_SIZE]; // 1 KB of zeroes; the content is irrelevant
    for (int i = 0; i < NUM_ENTRIES; i++) {
      byte[] key = ("key-" + i).getBytes(StandardCharsets.UTF_8);
      byte[] value = rocksDB.get(key); // the get exercises the row cache; put only if missing
      if (value == null) {
        rocksDB.put(key, valueBuffer);
      }
    }

    System.out.println("Iterating over data. This will trigger the block cache increase");
    try (RocksIterator rocksIterator = rocksDB.newIterator()) {
      rocksIterator.seekToFirst();
      while (rocksIterator.isValid()) {
        rocksIterator.next();
      }
    }
  }

  void closeRocks() {
    rocksDB.close();
    options.close();
    blockCache.close();
    rowCache.close();
    System.out.println("RocksDB and caches closed, memory should be released.");
  }

  public static void main(String[] args)
      throws RocksDBException {
    String path = args[0];
    deleteDirIfExists(path);

    RocksDBMemoryTest rocksDBMemoryTest = new RocksDBMemoryTest(path);
    rocksDBMemoryTest.insertAndIterate();
    rocksDBMemoryTest.closeRocks();

    // Keep process alive for memory inspection
    System.out.println("Process will stay alive for memory inspection. Press Ctrl+C to exit.");
    try { Thread.sleep(Long.MAX_VALUE); } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      System.out.println("Sleep interrupted, exiting.");
    }
  }

  static void deleteDirIfExists(String path) {
    try {
      java.nio.file.Path directoryPath = java.nio.file.Paths.get(path);
      if (java.nio.file.Files.exists(directoryPath)) {
        java.nio.file.Files.walk(directoryPath)
            .sorted(java.util.Comparator.reverseOrder())
            .forEach(p -> {
              try {
                java.nio.file.Files.delete(p);
              } catch (java.io.IOException e) {
                System.err.println("Failed to delete: " + p + " - " + e.getMessage());
              }
            });
        System.out.println("Existing directory deleted: " + path);
      }
    } catch (java.io.IOException e) {
      System.err.println("Error cleaning directory: " + e.getMessage());
    }
  }
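Before tearing everything down, it can help to ask the caches how much memory they still account for. The sketch below is not part of the original reproducer; it assumes the RocksJava Cache.getUsage() and getPinnedUsage() accessors. If both report ~0 just before close() while the RSS stays a gigabyte high, the retained memory sits outside the cache's own accounting:

```java
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.RocksDB;

public class CacheUsageProbe {
  static {
    RocksDB.loadLibrary();
  }

  // Prints how many bytes a cache currently accounts for, and how many of
  // those are pinned (held by readers and therefore not evictable).
  static void report(String name, Cache cache) {
    System.out.printf("%s: usage=%d bytes, pinned=%d bytes%n",
        name, cache.getUsage(), cache.getPinnedUsage());
  }

  public static void main(String[] args) {
    // A freshly created cache should report zero usage.
    try (Cache rowCache = new LRUCache(100 * 1024 * 1024)) {
      report("rowCache", rowCache);
    }
  }
}
```

In the reproducer above, calling report("blockCache", blockCache) at the top of closeRocks() would show whether the HyperClockCache still believes it holds ~1 GB at that point.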


Following. I've been experiencing similar issues (possibly related to https://github.com/facebook/rocksdb/issues/12942 and https://github.com/facebook/rocksdb/issues/12579) and was not able to reproduce on macOS when profiling memory allocations/leaks with Xcode Instruments.

leonardoarcari avatar Jul 14 '25 09:07 leonardoarcari