
Client Side Caching with Otter

Open · rueian opened this issue 1 year ago · 4 comments

Otter is a high-performance, lock-free cache that uses proactive TTL expiration and the S3-FIFO eviction policy.

Being lock-free is an important property for Redis server-assisted client-side caching, because invalidation messages from Redis should be applied as fast as possible; ideally, applying them is the first priority.

But in the current implementation, the pipelining goroutine handling invalidation messages competes with other reader goroutines to acquire the LRU lock. This delays invalidations and can further block the pipeline. Using otter could solve this.

rueian avatar Mar 06 '24 15:03 rueian

@rueian could you provide an option to use https://github.com/phuslu/lru? otter is slow on set but fast on get, while phuslu/lru is more balanced and uses less memory.

In other words, for write-intensive cache workloads phuslu/lru is the better choice, while for read-intensive workloads otter is better.

ouvaa avatar Mar 21 '24 08:03 ouvaa

@ouvaa Do you have any insight into why phuslu/lru is better at writes?

1a1a11a avatar Mar 21 '24 15:03 1a1a11a

Hi @ouvaa, rueidis's cache is read-intensive, but writes should be prioritized.

rueian avatar Mar 22 '24 17:03 rueian

@rueian Do you have a work in progress branch with otter we can test with?

sshankar avatar Oct 09 '24 14:10 sshankar

Hi @sshankar, sorry for my late reply.

Unfortunately, my progress on this is currently paused. I haven't finished adding a singleflight load mechanism to otter, which is a key feature that otter currently lacks. If we build the mechanism outside of otter, we will pay a high overhead for it.

What makes you interested in otter?

rueian avatar Oct 22 '24 04:10 rueian

hi @rueian, have you made any more progress on this one? I just ran into a problem: under heavy cache writes/updates, profiling shows significant contention in the LRU (the default cache).

04116 avatar Jan 20 '25 18:01 04116

Hi @04116, as mentioned previously, progress on this is currently paused. On the other hand, I am working on a flattened cache implementation at https://github.com/redis/rueidis/pull/712 to get more accurate cache size estimation and lower GC overhead.

Would you mind sharing your profiling results and describing how you use the cache, so we can better understand your situation?

rueian avatar Jan 20 '25 19:01 rueian

Hi @rueian

I'm not sure it is actually the problem; I just saw these at the top of the profile. [profile screenshots]

ahapeter avatar Jan 21 '25 06:01 ahapeter

Hi @ahapeter, thanks for your profile. I can't pinpoint the problem, but it looks like GC overhead is involved. It may work better with the ongoing flattened cache implementation at https://github.com/redis/rueidis/pull/712.

rueian avatar Jan 21 '25 17:01 rueian