Davide Angelocola
@andimarek what is the result of this experiment?
We have the same issue: we recently moved from rocksdb 7.x + CLOCK_CACHE to rocksdb 8.x + LRU_CACHE. The limit (3GB) is not respected at all: the process is...
@zaidoon1 FYI: switching to HYPER_CLOCK_CACHE fixed the memory issue in our case. Maybe it is a valid workaround for you too (but we were using CLOCK_CACHE in rocksdb 7.x)...
@zaidoon1 you're welcome! :-) Anyway, we are dealing with smaller range scans (up to a few thousand) with caching enabled, and 0 as estimated_entry_charge is working fine (maybe it could be...
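For reference, a minimal sketch of what this kind of cache swap could look like via the RocksDB Java API. The `HyperClockCache` binding, its constructor shape, and the option values below are assumptions on my side (the thread only names the cache type and estimated_entry_charge = 0), so treat it as illustrative rather than the exact setup:

```java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.HyperClockCache; // assumed Java binding; the C++ side is HyperClockCacheOptions
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class CacheSetup {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        long capacity = 3L * 1024 * 1024 * 1024; // 3GB limit, as in the report above
        // estimated_entry_charge = 0 asks the cache to estimate the per-entry charge itself;
        // constructor signature below is an assumption, check your RocksJava version.
        HyperClockCache cache = new HyperClockCache(
                capacity, /* estimatedEntryCharge */ 0,
                /* numShardBits */ -1, /* strictCapacityLimit */ false);
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig().setBlockCache(cache);
        try (Options options = new Options()
                     .setCreateIfMissing(true)
                     .setTableFormatConfig(tableConfig);
             RocksDB db = RocksDB.open(options, "/tmp/example-db")) {
            // ... run the workload and watch the process RSS against the 3GB limit ...
        }
    }
}
```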
Perhaps @ltamasi or @ajkr could help with the C++ part?
Got the same problem, 2 years later: the mysql-dist-5.6.21-linux-amd64.zip has 32-bit binaries.
It would be nice to also upgrade to the latest stable version of mysql 5.6.x. Needless to say, I can help with testing it.
Attaching a simple benchmark that tries to reproduce a very busy thread pool used for batching: ```java import java.time.Duration; import java.util.ArrayList; import java.util.List; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors;...
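The attached snippet is cut off in this feed; a self-contained sketch of what such a benchmark might look like follows. The pool size, task count, and per-task work are assumptions, not the original numbers:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BusyThreadPoolBenchmark {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        // Small fixed pool kept constantly saturated, to mimic a very busy batching executor.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Instant start = Instant.now();

        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            final int batch = i;
            // Each task does a tiny amount of CPU work, standing in for one batch.
            futures.add(pool.submit(() -> {
                int acc = 0;
                for (int j = 0; j < 1_000; j++) {
                    acc += (batch + j) % 7;
                }
                return acc;
            }));
        }

        long total = 0;
        for (Future<Integer> f : futures) {
            total += f.get();
        }

        pool.shutdown();
        System.out.println("total=" + total + " elapsed=" + Duration.between(start, Instant.now()));
    }
}
```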
@ajkr thanks! I triggered a manual compaction via the Java API: ```java public void compactRange() throws RocksDBException { compactRange(null); } ``` but I still see a lot of logs like this (RocksDB...
The problem has been fixed by using `compactRange()` with `bottommost_level_compaction`. Thanks @ajkr @andlr!
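For anyone landing here later, a minimal sketch of this fix via the RocksDB Java API; the database path, the use of the default column family, and the choice of kForce are illustrative assumptions, not copied from the actual code:

```java
import org.rocksdb.CompactRangeOptions;
import org.rocksdb.CompactRangeOptions.BottommostLevelCompaction;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class ForcedCompaction {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/example-db");
             CompactRangeOptions cro = new CompactRangeOptions()
                     // Force the bottommost level to be rewritten as well, so stale data
                     // and tombstones sitting there are actually dropped.
                     .setBottommostLevelCompaction(BottommostLevelCompaction.kForce)) {
            // null begin/end keys mean "compact the whole key range" of this column family.
            db.compactRange(db.getDefaultColumnFamily(), null, null, cro);
        }
    }
}
```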