Client-Side Caching with Otter
Otter is a high-performance, lockless cache that uses proactive TTL expiration and the S3-FIFO eviction policy.
Being lockless is an important property for Redis server-assisted client-side caching, because invalidation messages from Redis should be applied as quickly as possible; applying them should be the first priority whenever possible.
However, in the current implementation, the pipelining goroutine that handles invalidation messages competes with other reader goroutines to acquire the LRU lock. This delays invalidations and further blocks the pipeline. Using otter can solve this.
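For illustration, here is a minimal sketch of the contention (not the actual rueidis code; the `lruCache` type and its methods are hypothetical simplifications): readers and the invalidation path funnel through the same mutex, so a burst of reads delays invalidation delivery.

```go
package main

import (
	"sync"
	"time"
)

// lruCache stands in for a mutex-guarded LRU: a plain map is used here
// instead of a real LRU list to keep the sketch short.
type lruCache struct {
	mu    sync.Mutex
	store map[string][]byte
}

func (c *lruCache) Get(key string) ([]byte, bool) {
	c.mu.Lock() // readers take the cache-wide lock...
	defer c.mu.Unlock()
	v, ok := c.store[key]
	return v, ok
}

func (c *lruCache) Delete(key string) {
	c.mu.Lock() // ...which the invalidation path also needs, so invalidations queue behind reads.
	defer c.mu.Unlock()
	delete(c.store, key)
}

func main() {
	c := &lruCache{store: map[string][]byte{"k": []byte("v")}}

	// Reader goroutines hammer Get and hold the lock most of the time.
	for i := 0; i < 8; i++ {
		go func() {
			for {
				c.Get("k")
			}
		}()
	}

	// The goroutine applying server invalidation messages has to wait for
	// the same lock, delaying invalidation and back-pressuring the pipeline.
	go func() {
		for {
			c.Delete("k")
		}
	}()

	time.Sleep(time.Second)
}
```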
@rueian could you provide an option to use https://github.com/phuslu/lru? otter is slow on set but fast on get, while phuslu/lru is more balanced and uses less memory.
In other words, for write-intensive cache workloads it is better to use phuslu, and for read-intensive ones, otter.
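A rough way to check the read-heavy vs. write-heavy claim is a benchmark over a common interface. The `Cache` interface and the mutex-guarded `mapCache` baseline below are hypothetical stand-ins; adapters for otter and phuslu/lru would be plugged into `benchMixed` the same way.

```go
package cachebench

import (
	"strconv"
	"sync"
	"testing"
)

// Cache is a hypothetical common interface that adapters for otter or
// phuslu/lru could implement for a side-by-side comparison.
type Cache interface {
	Get(key string) ([]byte, bool)
	Set(key string, val []byte)
}

// mapCache is a mutex-guarded map used only as a baseline implementation.
type mapCache struct {
	mu sync.Mutex
	m  map[string][]byte
}

func (c *mapCache) Get(k string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.m[k]
	return v, ok
}

func (c *mapCache) Set(k string, v []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[k] = v
}

// benchMixed runs a parallel Get/Set mix with the given write percentage.
func benchMixed(b *testing.B, c Cache, writePct int) {
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			key := "k" + strconv.Itoa(i%1024)
			if i%100 < writePct {
				c.Set(key, []byte("v"))
			} else {
				c.Get(key)
			}
			i++
		}
	})
}

// 5% writes approximates a read-intensive workload; 50% a write-intensive one.
func BenchmarkReadHeavy(b *testing.B)  { benchMixed(b, &mapCache{m: map[string][]byte{}}, 5) }
func BenchmarkWriteHeavy(b *testing.B) { benchMixed(b, &mapCache{m: map[string][]byte{}}, 50) }
```

Running this with `go test -bench . -benchmem` and comparing ns/op between the two workloads for each adapter would show which cache suits which mix.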
@ouvaa Do you have any insight into why phuslu is better at writes?
Hi @ouvaa, rueidis' cache is read-intensive, but writes, namely the invalidations, should be prioritized.
@rueian Do you have a work in progress branch with otter we can test with?
Hi @sshankar, sorry for my late reply.
Unfortunately, my progress on this is currently paused. I haven't finished adding a singleflight loading
mechanism to otter, which is a key feature otter currently lacks. If we build the mechanism outside of otter, we will have to pay a high overhead for it.
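For context, a singleflight loading layer can be built outside the cache, but then every miss pays for the group's own locking and map bookkeeping on top of the cache lookup. A minimal sketch of that outside-the-cache approach, assuming a generic cache exposed as get/set functions (the `loadingCache` type is hypothetical, not rueidis or otter API):

```go
package cacheload

import (
	"context"

	"golang.org/x/sync/singleflight"
)

// loader fetches a value on cache miss, e.g. by issuing a Redis command.
type loader func(ctx context.Context, key string) ([]byte, error)

// loadingCache wraps any get/set cache with a singleflight group so that
// concurrent misses on the same key trigger only one load. This is the
// "outside of the cache" approach: every miss goes through the group's
// own mutex and map, which is the extra overhead mentioned above.
type loadingCache struct {
	get   func(key string) ([]byte, bool)
	set   func(key string, val []byte)
	group singleflight.Group
}

func (c *loadingCache) Load(ctx context.Context, key string, load loader) ([]byte, error) {
	if v, ok := c.get(key); ok {
		return v, nil
	}
	v, err, _ := c.group.Do(key, func() (interface{}, error) {
		// Re-check inside the flight in case another goroutine already filled it.
		if v, ok := c.get(key); ok {
			return v, nil
		}
		val, err := load(ctx, key)
		if err != nil {
			return nil, err
		}
		c.set(key, val)
		return val, nil
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}
```

Building the same thing into the cache itself would avoid the duplicated lookup and the group's extra synchronization, which is the overhead mentioned above.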
What makes you interested in otter?
Hi @rueian, have you made any more progress on this one? I just ran into the problem that when I write/update the cache heavily, profiling shows more contention in the LRU (the default cache).
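One way to surface this kind of lock contention in a profile is Go's mutex and block profiling via the standard pprof endpoints (a minimal sketch; the listen address and the workload placement are placeholders):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof handlers
	"runtime"
)

func main() {
	// Record every mutex contention and blocking event so lock waits
	// (e.g. on the default LRU cache lock) show up in the profiles.
	runtime.SetMutexProfileFraction(1)
	runtime.SetBlockProfileRate(1)

	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... run the write/update-heavy cache workload here ...
	select {}
}
```

Then `go tool pprof http://localhost:6060/debug/pprof/mutex` (or `.../block`) shows where goroutines wait on locks.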
Hi @04116, as mentioned previously, progress on this is currently paused. On the other hand, I am working on a flattened cache implementation here https://github.com/redis/rueidis/pull/712 to get more accurate cache size estimation and lower GC overhead.
Would you mind sharing your profiling result and how you use the cache, so that we can better understand your situation?
Hi @rueian
Not sure it is actually the problem; I just saw these at the top of the profile.
Hi @ahapeter, thanks for your profile. I can't really pinpoint the problem, but it seems there is something related to GC overhead in it. I think it may work well with the new ongoing flattened cache implementation https://github.com/redis/rueidis/pull/712.
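As a rough illustration of why flattening can lower GC overhead (a conceptual sketch only, not the design in https://github.com/redis/rueidis/pull/712): keeping values in one contiguous byte slice addressed by offsets means far fewer individual allocations for the GC to track, and the length of that slice gives an accurate size estimate.

```go
package flatcache

// flatCache is a conceptual sketch: values are appended to one contiguous
// byte slice and addressed by offset/length, so the GC tracks a single
// backing array instead of one allocation per entry. It has no eviction or
// overwrite compaction; it only illustrates the flattening idea.
type flatCache struct {
	data  []byte            // all cached values, back to back
	index map[string][2]int // key -> {offset, length} into data
}

func newFlatCache() *flatCache {
	return &flatCache{index: make(map[string][2]int)}
}

func (c *flatCache) Set(key string, val []byte) {
	off := len(c.data)
	c.data = append(c.data, val...)
	c.index[key] = [2]int{off, len(val)}
}

func (c *flatCache) Get(key string) ([]byte, bool) {
	loc, ok := c.index[key]
	if !ok {
		return nil, false
	}
	return c.data[loc[0] : loc[0]+loc[1]], true
}

// Size reports exactly how many value bytes are cached.
func (c *flatCache) Size() int { return len(c.data) }
```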