Mark Callaghan

Results 59 comments of Mark Callaghan

So that I understand, you are only asking about performance when compaction is disabled, so all that happens is memtable flushes to L0? I don't have much experience in that...

I don't know whether the perf numbers/problems you describe occur during load (memtable flushes only) or during final compaction. I don't understand the throughput numbers you quote for lz4 and...

Did you set a value via --threads ? What version or commit of RocksDB are you using? I can't reproduce this using a much smaller value for --num and RocksDB...
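For anyone trying to reproduce, a minimal db_bench invocation looks like the sketch below. The values for --num, --threads and --db are placeholders, not the settings from the original report.

```shell
# Hypothetical repro sketch: --num (keys) and --threads (client threads)
# are the two flags asked about above; scale them to your hardware.
./db_bench \
  --benchmarks=overwrite \
  --num=1000000 \
  --threads=1 \
  --db=/tmp/db_bench_repro
```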

In past tests I usually see RSS with glibc malloc being 2X or more larger than RSS with jemalloc. One example - https://smalldatum.blogspot.com/2018/04/myrocks-malloc-and-fragmentation-strong.html RocksDB can be a stress test for...

On a small server I have at home running the db_bench benchmarks with one client thread I get ~70k Puts/s from overwriteandwait with RocksDB 8.7.2 and 8.8.0 when compaction_readahead_size=2MB and...

On a large server the insert rate from overwrite improves by 13.4% when I reduce compaction_readahead_size from 2MB to 1MB so that it is smaller than the value of max_sectors_kb...
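A sketch of how to compare the two values: read max_sectors_kb for the device from sysfs, then pass a compaction_readahead_size below that limit to db_bench. The device name is a placeholder.

```shell
# Largest request size (in KB) the kernel will send to this device;
# nvme0n1 is a placeholder, substitute your own device.
cat /sys/block/nvme0n1/queue/max_sectors_kb

# Run overwrite with readahead set to 1MB (1048576 bytes) so each
# compaction read fits in a single device request.
./db_bench \
  --benchmarks=overwrite \
  --compaction_readahead_size=1048576 \
  --db=/tmp/db_bench_repro
```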

More results [are here](https://smalldatum.blogspot.com/2023/11/debugging-perf-changes-in-rocksdb-86-on.html).

> > On a large server the insert rate from overwrite improves by 13.4% when I reduce compaction_readahead_size from 2MB to 1MB so that it is smaller than the value...

More results [are here](https://smalldatum.blogspot.com/2024/01/rocksdb-8x-benchmarks-large-server-io.html) showing that throughput for overwrite drops by ~5% in 8.7 and probably in 8.6. From iostat I see that the average read size is much...
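To observe the average read size yourself, extended iostat output works; the rareq-sz column reports the average size (in KB) of read requests issued to the device. The device name is a placeholder.

```shell
# Sample extended device stats once per second during the benchmark;
# compare rareq-sz across RocksDB versions. nvme0n1 is a placeholder.
iostat -x 1 /dev/nvme0n1
```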

And more results [are here](https://smalldatum.blogspot.com/2024/01/explaining-changes-in-rocksdb.html) showing the impact of compaction_readahead_size set to the default (2MB), 1MB and 512KB.