
Improve Cache Admission logic


Right now we use a simple LRU logic, with a minor refinement introduced in https://github.com/quickwit-oss/quickwit/pull/1732: we do not evict items that are too fresh.

Time here is used as an approximate proxy for the following logic: if two items were requested in the context of the same request, it does not make sense to evict the first one in order to store the second one. In fact, it is counterproductive if the same query, or a variation of it, is executed again (for instance, that would yield a 0% hit rate if the fast fields do not all fit in RAM).
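To make the current behavior concrete, here is a minimal sketch of a byte-budgeted LRU whose eviction loop refuses to evict entries younger than a freshness threshold. This is illustrative only, not Quickwit's actual cache code: `FreshnessAwareLru`, `put`, and the field names are all hypothetical.

```rust
use std::collections::{HashMap, VecDeque};
use std::hash::Hash;
use std::time::{Duration, Instant};

/// Sketch of a byte-budgeted LRU cache that refuses to evict entries younger
/// than `freshness_threshold` (illustrative, not Quickwit's implementation).
struct FreshnessAwareLru<K, V> {
    capacity_bytes: usize,
    used_bytes: usize,
    freshness_threshold: Duration,
    /// LRU order: front = least recently used. Each entry records its insertion time.
    lru_queue: VecDeque<(K, Instant, usize)>,
    entries: HashMap<K, V>,
}

impl<K: Hash + Eq + Clone, V> FreshnessAwareLru<K, V> {
    fn new(capacity_bytes: usize, freshness_threshold: Duration) -> Self {
        FreshnessAwareLru {
            capacity_bytes,
            used_bytes: 0,
            freshness_threshold,
            lru_queue: VecDeque::new(),
            entries: HashMap::new(),
        }
    }

    fn put(&mut self, key: K, value: V, size_bytes: usize) {
        // Evict least-recently-used entries to make room for the new item...
        while self.used_bytes + size_bytes > self.capacity_bytes {
            let candidate_is_evictable = match self.lru_queue.front() {
                Some((_, inserted_at, _)) => inserted_at.elapsed() >= self.freshness_threshold,
                None => false,
            };
            // ...but stop as soon as the eviction candidate is too fresh: it was
            // probably stored by the same (or a concurrent) request, so evicting
            // it to admit this item would only make the cache thrash.
            if !candidate_is_evictable {
                return;
            }
            let (evicted_key, _, evicted_size) = self.lru_queue.pop_front().unwrap();
            self.entries.remove(&evicted_key);
            self.used_bytes -= evicted_size;
        }
        self.used_bytes += size_bytes;
        self.lru_queue.push_back((key.clone(), Instant::now(), size_bytes));
        self.entries.insert(key, value);
    }
}
```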

We want a cache policy that works decently for both regimes.

NOTES: I wonder if it is possible to tweak an LFU admission policy in order to get the same effect (a rough sketch of what that could look like is included at the end of this note). stretto and moka are two battle-tested cache solutions that might seem like a good pick, but unfortunately they do not fit our needs.

They target a highly concurrent, small key/value use case, and do not strictly respect the limits they are assigned: all data is stored, and eventually GCed by a background task.

In our use case, we get/store very large, expensive objects, at a pace of a few hundred per second.
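For reference, the "tweak an LFU admission policy" idea mentioned above could look roughly like a TinyLFU-style admission check, where a candidate is only admitted if its estimated access frequency is at least that of the eviction victim. The sketch below is purely hypothetical: `FrequencySketch`, `record`, `estimate`, and `should_admit` are illustrative names, and a real implementation would use a compact counting sketch with periodic aging rather than a HashMap.

```rust
use std::collections::HashMap;

/// Hypothetical frequency-based admission helper in the spirit of TinyLFU.
struct FrequencySketch {
    access_counts: HashMap<u64, u32>,
}

impl FrequencySketch {
    fn new() -> Self {
        FrequencySketch { access_counts: HashMap::new() }
    }

    /// Record one access to the item identified by `key_hash`.
    fn record(&mut self, key_hash: u64) {
        *self.access_counts.entry(key_hash).or_insert(0) += 1;
    }

    fn estimate(&self, key_hash: u64) -> u32 {
        self.access_counts.get(&key_hash).copied().unwrap_or(0)
    }

    /// Admission decision: only evict the victim if the candidate is accessed
    /// at least as often, so a one-off scan cannot flush frequently used items.
    fn should_admit(&self, candidate_hash: u64, victim_hash: u64) -> bool {
        self.estimate(candidate_hash) >= self.estimate(victim_hash)
    }
}
```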

fulmicoton, Jul 07 '22 00:07