Zaidoon Abd Al Hadi

Results: 44 comments by Zaidoon Abd Al Hadi

While at least in the above instance I had something "obvious" to blame, this has happened again on a different machine. Except this time, it doesn't look like we exceeded the configured...

@ajkr any idea what could have happened here in both cases? I guess the easiest question to answer is how/why RocksDB went above the allocated LRU cache size. Unfortunately, I...

I was thinking of using a strict LRU capacity limit, but it looks like reads (and writes?) will fail once the capacity is hit, which is not what I expected. Why don't we evict...
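For what it's worth, here is a toy sketch (my own illustration, not RocksDB's actual sharded implementation) of the behavioral difference being discussed: with a strict capacity limit an insert that would exceed capacity fails and the caller sees the error, while the default policy evicts the least-recently-used entry instead.

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Toy LRU cache modeling the two capacity policies discussed above.
// strict == true  : insert fails once the cache is full (caller sees failure)
// strict == false : the least-recently-used entry is evicted to make room
class ToyLruCache {
 public:
  ToyLruCache(size_t capacity, bool strict)
      : capacity_(capacity), strict_(strict) {}

  bool Insert(const std::string& key, int value) {
    if (map_.size() >= capacity_) {
      if (strict_) return false;  // strict: reject the insert outright
      // default: evict the LRU entry (back of the recency list)
      map_.erase(order_.back());
      order_.pop_back();
    }
    order_.push_front(key);
    map_[key] = {value, order_.begin()};
    return true;
  }

  bool Lookup(const std::string& key, int* out) {
    auto it = map_.find(key);
    if (it == map_.end()) return false;
    // Move the hit entry to the front so it is most-recently-used.
    order_.splice(order_.begin(), order_, it->second.pos);
    *out = it->second.value;
    return true;
  }

 private:
  struct Entry {
    int value;
    std::list<std::string>::iterator pos;
  };
  size_t capacity_;
  bool strict_;
  std::list<std::string> order_;  // front = most recently used
  std::unordered_map<std::string, Entry> map_;
};
```

The surprising part for callers is the strict mode: a full cache turns cache misses into hard errors instead of degrading gracefully by eviction.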

Here is more data: it looks like this happens when we have lots of tombstones. This appears to match what was happening in https://github.com/facebook/rocksdb/issues/2952, although the issue there was due to...

> What allocator are you using

I'm using jemalloc as the allocator (I've double-checked this). In the last instance this happened (screenshot above), the block cache was not maxing out...

1. Are the 5K requests in parallel? Yes, they are.
2. I've enabled cache_index_and_filter_blocks; everything else is left at the default (I'll need to check the default for unpartitioned_pinning).
3. ...

Also, here are the db options I have configured: [db_options.txt](https://github.com/facebook/rocksdb/files/15196110/db_options.txt)

OK, this is good to know; I'll definitely investigate this part. I would like to confirm: if we assume that's the problem, then my options are:

1. set cache_index_and_filter_blocks to...

Cool, I'll check this out. Just to double-check: is unpartitioned_pinning = PinningTier::kAll preferred over setting cache_index_and_filter_blocks to false?
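To make the two alternatives being compared concrete, here is a configuration sketch using the RocksDB C++ API. The option and type names (`cache_index_and_filter_blocks`, `metadata_cache_options.unpartitioned_pinning`, `PinningTier::kAll`) are from RocksDB's `BlockBasedTableOptions`; the cache size and the choice of values are illustrative, not a recommendation.

```cpp
#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Illustrative sketch of the two alternatives discussed above.
rocksdb::Options MakeOptions() {
  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = rocksdb::NewLRUCache(
      512 << 20 /* capacity: 512 MiB, illustrative */,
      -1 /* num_shard_bits: library default */,
      false /* strict_capacity_limit */);

  // Alternative A: keep index/filter blocks out of the block cache
  // entirely (they live in heap memory, outside the cache's accounting):
  //   table_opts.cache_index_and_filter_blocks = false;

  // Alternative B: keep them in the block cache, so they are charged
  // against its capacity, but pin them so they are never evicted:
  table_opts.cache_index_and_filter_blocks = true;
  table_opts.metadata_cache_options.unpartitioned_pinning =
      rocksdb::PinningTier::kAll;

  rocksdb::Options options;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_opts));
  return options;
}
```

The trade-off, as I understand it: alternative B keeps metadata memory visible in the block cache's usage accounting while avoiding eviction-induced re-reads, whereas alternative A moves that memory outside the cache's budget entirely.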

Great! Thanks for confirming. Once the C API changes land, I'll experiment with this and report back.