Latency decreases as throughput increases under rate-limiting
Using the recently introduced --rate-limit option, I ran a simple benchmark in which the rate limit doubles on each run, to see how latency relates to throughput as the sustained throughput increases.
Here are the results, as well as the command to reproduce them (the server is Redis):
$ for i in 1000 2000 4000 8000 16000 32000 64000 128000; do memtier_benchmark -h 10.20.1.4 --hide-histogram --test-time 30 --threads 1 --clients 50 --rate-limit $((i/50)); done 2>/dev/null | grep -E "(Type|Totals)"
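The $((i/50)) term reflects my understanding that --rate-limit caps each client connection individually rather than the total load, so each total target is split evenly across the 50 clients. A quick sketch of that arithmetic:

$ for i in 1000 2000 4000 8000 16000 32000 64000 128000; do echo "$i total -> $((i/50)) per client"; done

so the first iteration, for example, prints "1000 total -> 20 per client".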
Type    Ops/sec   Hits/sec  Misses/sec  Avg. Latency  p50 Latency  p99 Latency  p99.9 Latency  KB/sec
Totals  1001.44   0.00      909.79      0.97415       0.75900      8.19100      10.04700       42.50
Totals  2001.06   0.00      1817.78     0.81948       0.71100      6.23900      9.72700        84.94
Totals  4000.67   0.00      3635.77     0.77258       0.62300      6.27100      9.79100        169.73
Totals  7999.25   0.00      7271.14     0.69692       0.63100      2.62300      9.40700        339.32
Totals  15747.45  0.00      14314.50    0.66834       0.63900      1.31100      8.70300        667.98
Totals  31851.22  1.67      28953.36    0.65727       0.64700      1.19100      7.03900        1350.96
Totals  63371.75  6.67      57603.23    0.64743       0.65500      1.15900      4.22300        2688.16
Totals  78407.32  24.70     71254.00    0.63731       0.63900      1.07900      4.25500        3326.53
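To see the trend at a glance, the Ops/sec and Avg. Latency columns can be pulled out as pairs; a minimal sketch, assuming the output above was saved to results.txt (a file name chosen here for illustration):

$ awk '/^Totals/ {print $2, $5}' results.txt

which prints one "ops-per-sec avg-latency" pair per run, e.g. "1001.44 0.97415" for the first one.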
The results are surprising: all the latency metrics seem to decrease as the throughput increases. From similar benchmarks I expected latency to increase gradually with sustained throughput, and then to explode once a bottleneck is reached. Is there an issue with the latency calculation, or is this expected?
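(For context, the intuition behind that expectation is the usual open-loop queueing picture, my own mental model rather than anything memtier_benchmark documents: in an M/M/1 queue with arrival rate λ and service rate μ, the mean time in system is W = 1/(μ - λ), which is nearly flat at low load and blows up as λ approaches μ.)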
Thanks!