FusionCache
Absolute expiration time is not preserved during recovering from distributed cache to in-memory cache
The current implementation does not take into account the AbsoluteExpiration set when the entry was added to the IDistributedCache:
https://github.com/jodydonetti/ZiggyCreatures.FusionCache/blob/f1380b7c562eeb610c58490fd7e7f63bf69e0569/src/ZiggyCreatures.FusionCache/FusionCache_Sync.cs#L145
This may lead to a hidden extension of the entry's lifetime (especially for the TryGet[Async] methods).
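To make the scenario concrete, here is a minimal sketch of the behavior (the setup is hypothetical: two FusionCache instances simulate two nodes sharing one distributed cache, and it assumes the NewtonsoftJson serializer package):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;
using ZiggyCreatures.Caching.Fusion;
using ZiggyCreatures.Caching.Fusion.Serialization.NewtonsoftJson;

// two FusionCache instances ("nodes") sharing the same distributed cache
var distributedCache = new MemoryDistributedCache(Options.Create(new MemoryDistributedCacheOptions()));

var nodeA = new FusionCache(new FusionCacheOptions());
nodeA.SetupDistributedCache(distributedCache, new FusionCacheNewtonsoftJsonSerializer());

var nodeB = new FusionCache(new FusionCacheOptions());
nodeB.SetupDistributedCache(distributedCache, new FusionCacheNewtonsoftJsonSerializer());

// node A caches a value with a 10s duration: the absolute expiration is now + 10s
nodeA.Set("foo", 42, new FusionCacheEntryOptions { Duration = TimeSpan.FromSeconds(10) });

// ~8s later node B reads the entry for the first time: it misses locally, finds the
// entry in the distributed cache and copies it into its own memory cache...
await Task.Delay(TimeSpan.FromSeconds(8));
var result = nodeB.TryGet<int>("foo");

// ...but the local copy gets a fresh, full duration instead of the ~2s remaining
// until the original absolute expiration, so the entry's lifetime is silently extended
```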
As IDistributedCache does not return the expiration time, it's possible to extend the Metadata payload with the calculated absolute expiration and use it when creating the entry in the in-memory cache.
Also, it is worth noting in the documentation that the current implementation uses FusionCacheEntryOptions for the restored entries.
Hi @yzhoholiev , and thanks for considering FusionCache! Honestly I think you probably have a point here, but I also remember reasoning about this aspect back at the time when I did it. Anyway let me look a little bit more into this in the next few days, and I will come back to you. Thanks!
Hi @yzhoholiev , sorry for the delay but after 2 years covid finally got me 🤒.
Anyway I've looked more into this, and here are some random things I'm thinking about:
- currently FusionCache already saves the LogicalExpiration into the metadata
- I may be able to use that LogicalExpiration as the expiration for the local memory cache, so as to have everything aligned (+/- some jittering, if configured) (see the sketch after this list)
- the metadata though is (currently) saved only if fail-safe is enabled, to not waste resources (memory + bandwidth)
- therefore, to proceed this way, I would be forced to always save the metadata (consuming more resources)
- that would be ok to get a better, more precise behavior
- also, I can limit this change to the distributed cache only (so that the expiration will flow through different nodes) and not apply it to the memory cache
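To make the second point concrete, here's a rough sketch of the idea; note that ComputeMemoryDuration is just an illustrative helper of mine, not FusionCache's actual internal code:

```csharp
// when a distributed entry carries metadata with a LogicalExpiration, derive the
// memory entry's duration from the time remaining until that expiration, instead
// of restarting a full Duration from scratch
static TimeSpan ComputeMemoryDuration(DateTimeOffset? logicalExpiration, TimeSpan defaultDuration)
{
    // no metadata available: fall back to the Duration from the entry options
    if (logicalExpiration is null)
        return defaultDuration;

    var remaining = logicalExpiration.Value - DateTimeOffset.UtcNow;

    // clamp at zero: an already (logically) expired entry should not be revived
    return remaining > TimeSpan.Zero ? remaining : TimeSpan.Zero;
}
```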
The thing is not super straightforward when considering all the possibilities, since it involves every combination of entries coming from the memory cache or the distributed cache, fresh or stale, with or without fail-safe enabled, etc.
I'll take some more time to let all these thoughts marinate, experiment with some changes, and come back to you as soon as possible.
Hi @yzhoholiev , I just pushed a change that should fix what you reported.
A couple of notes regarding the change:
- now, when creating a memory cache entry from a distributed cache entry, the logical expiration will be used if there's metadata, even if fail-safe is not enabled for that specific call
- to allow for the logical expiration to be there even when fail-safe is not enabled, FusionCache now always includes the metadata in the distributed cache entry
- if specified, jittering will still be added locally on top of the logical expiration from the distributed cache, so as to respect that setting (which btw is disabled by default) (sketched below)
- this new version may consume a little more memory than the previous ones, since it always includes the metadata, but only in the cases where fail-safe was not enabled (if fail-safe was enabled, the metadata was already being included), and all in all I think this is not a big deal
This gives us a more precise local expiration when an entry is coming from the distributed cache, without consuming more memory when there's no distributed cache involved.
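For clarity, here's an illustrative sketch of the jittering point above (the helper and its names are mine for illustration, not the actual implementation):

```csharp
// the local expiration is aligned to the distributed entry's logical expiration,
// then a random amount of time up to JitterMaxDuration is added on top, so that
// different nodes don't all expire the same entry at the same instant
static DateTimeOffset ComputeLocalExpiration(DateTimeOffset logicalExpiration, TimeSpan jitterMaxDuration)
{
    // JitterMaxDuration is TimeSpan.Zero by default, meaning jittering is disabled
    if (jitterMaxDuration <= TimeSpan.Zero)
        return logicalExpiration;

    var jitterMs = Random.Shared.NextDouble() * jitterMaxDuration.TotalMilliseconds;
    return logicalExpiration + TimeSpan.FromMilliseconds(jitterMs);
}
```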
Please let me know what you think: in case everything looks good also to you, I'll push a new release asap.
Thanks!
Hi @yzhoholiev , I just saw your 👍 reaction: just so I know what to do, was the thumbs up more like "ok, I'll take a look at that and let you know" (so I'll wait) or "I looked at that and it's good to me" (so I move on and release a new version)? Thanks!
@jodydonetti sorry for the uncertainty. We will spend some time testing the implementation.
Got it, thanks!
The fix seems to be working!
Awesome, thanks for testing it! I'll release a new version in the next few days and will update you.
Hey @yzhoholiev , I just released v0.13.0 which contains this fix, hope this helps.
Closing this issue.