KeyDB
Performance degradation: 6.3.1 vs 6.2.2
Hi, I tested on different systems, but every time the results are the same: 6.3.x performs worse than 6.2.x.
redis-benchmark -q -n 1000000 --threads 64
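For anyone trying to reproduce the comparison, here is a minimal sketch assuming the two versions run side by side on one machine, with 6.2.2 on port 6379 and 6.3.1 on port 6380 (the ports and the side-by-side setup are assumptions, not part of the original report):

```sh
# Assumed setup: KeyDB 6.2.2 listening on 6379 and 6.3.1 on 6380,
# started with the same config so the only variable is the version.
redis-benchmark -q -n 1000000 --threads 64 -p 6379 > bench-6.2.2.txt
redis-benchmark -q -n 1000000 --threads 64 -p 6380 > bench-6.3.1.txt

# Compare the reported requests/sec for each test side by side.
diff -y bench-6.2.2.txt bench-6.3.1.txt
```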

Second that 👍 Latency also grew 4-5x for us, and that's a lot. Waiting for the next release to be better and faster than the older 6.2.x version, or at least to match its performance while adding the new functionality and bugfixes.

Same thing for us. On a single server it works OK; we had a master/master cluster with a small dataset: 6 or 7 streams, a couple with around 500,000 entries, most of them smaller, plus the usual assortment of zsets, hsets, plain keys, etc.
We saw the same issues. Latency was so bad that the UI or underlying processes would receive a disconnect from the server.
Summary:
- AOF file loads are huge and time-consuming
- AOF partial reloads fail almost every time
- Lots of errors regarding rreplay
- Memory usage is doubled or tripled
- KeyDB itself stalls. We tried adjusting HAProxy timeouts, etc., but kept getting check errors where KeyDB would not respond to the HAProxy health check. The behavior was also noticeable on the CLI: typing a simple command to check memory would hang for 1-3 seconds or more.
- We upsized and tried two different VM types, and the problems continued. We tried both ARM and x86 builds of 6.3.1 and saw the behavior on both.
It really seems to be something in the replication that is eating up resources over time. Unfortunately we had to roll back to 6.2.2 for stability reasons and because of an approaching deadline.
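To put numbers on the stalls and memory growth described above, a rough monitoring sketch; it assumes keydb-cli behaves like redis-cli (whose --latency-history and INFO options it inherits) and that the instance listens on the default port:

```sh
# Track command latency in 15-second windows while replication is running;
# multi-second spikes here would match the hangs seen on the CLI.
keydb-cli --latency-history -i 15

# Snapshot memory and replication state for a before/after comparison
# between 6.2.2 and 6.3.1.
keydb-cli info memory | grep -E 'used_memory_human|used_memory_rss_human'
keydb-cli info replication
```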
Same here. We run a batch of operations for each event at a rate of ~100 events/s. The operations are (in this order): get, set, expire, hget, hset, expiremember. On 6.2 this took ~40-50 ms (95th percentile). With 6.3.1, the required time increased to >200 ms and the throughput dropped to 50 events/s (so the impact is even higher than a factor of 4x).
KeyDB's CPU load increased significantly (~50%) and memory usage by ~30-40%.
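For illustration, the per-event sequence written out as a script, with hypothetical key names and TTLs (event:42, session:42, and the 60-second expiry are made up; only the command order matches the report above):

```sh
# Hypothetical keys and TTLs; keydb-cli reads commands from stdin like redis-cli.
keydb-cli <<'EOF'
GET event:42
SET event:42 payload
EXPIRE event:42 60
HGET session:42 state
HSET session:42 state active
EXPIREMEMBER session:42 state 60
EOF
```

Wrapping this in a loop and timing it on 6.2.x vs 6.3.1 would show whether the 95th-percentile increase reproduces outside the original workload.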
Still no reaction from the KeyDB devs on this major performance issue?
I'm afraid that such an issue could drive current KeyDB users to other Redis alternatives such as dragonflydb or cachegrand...
I think @JohnSully already clarified this here: https://github.com/Snapchat/KeyDB/issues/494#issuecomment-1271655744
I can see no clarification about this particular issue there.
@JohnSully with 6.3.2 and 6.3.3, two stable releases are now available. Is there a plan for if/when this performance issue will be addressed?
Kind regards, Michael.
Hi @micw I have a 10% performance improvement coming for workloads that use expires. The second-highest priority is going to be FLASH performance.
I don't have any immediate plans to address performance without expires, but I'm expecting that to improve a bit as part of the FLASH investigation as we take more profiles.
Hello @JohnSully Thank you for the fast reply. Is the +10% compared to 6.2 or to 6.3.1 (which has the massive performance degradation described in this issue)?
@micw can you confirm, then, that these issues persist in 6.3.3? Are you still using 6.2.2?
@JohnSully stepping back from this specific issue, it seems like the development velocity of keydb has slowed down tremendously compared to before. Someone linked above to a comment you made about being unexpectedly focused on other projects within Snap, which is obviously fine. But do you anticipate returning focus to keydb at some point, and can you perhaps give an estimate of when? It would be helpful to me and others so that we can select our tooling accordingly. I'd REALLY love to use keydb, but it's hard to justify when the project seems largely inactive. Thanks!