add expiration_time to cache
Hi, I want to use your great package, but I need a feature to deal with expiration of cache elements. So I added expiration_time to alru_cache and wrote a simple test. Would you like to update the cache? Let me know, and feel free to give suggestions. Thank you!
Codecov Report
Merging #50 into master will decrease coverage by 2%. The diff coverage is 88%.
```diff
@@          Coverage Diff           @@
##          master    #50    +/-   ##
=====================================
- Coverage    100%    98%    -2%
=====================================
  Files          1      1
  Lines        136    150    +14
  Branches      24     28     +4
=====================================
+ Hits         136    147    +11
- Misses         0      2     +2
- Partials       0      1     +1
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| async_lru.py | 98% <88%> (-2%) | :arrow_down: |
Hey, this should mostly be a separate package, because LRU is not designed for that.
My suggestion: if you need some kind of TTL, just clear the whole LRU every N seconds.
Also, I've seen https://github.com/krkd/aiottl, which looks promising for a TTL cache, but it seems not fully ready yet.
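The "clear everything every N seconds" suggestion above can be sketched as a background task wiping a shared cache. This is a minimal illustration using a plain dict, not the async_lru internals; the helper name `clear_periodically` is hypothetical:

```python
import asyncio


async def clear_periodically(cache: dict, interval: float) -> None:
    """Hypothetical helper: wipe the whole cache every `interval` seconds."""
    while True:
        await asyncio.sleep(interval)
        cache.clear()  # drop every entry at once, coarse-grained TTL


async def main() -> None:
    cache = {"user:1": {"name": "alice"}}
    task = asyncio.create_task(clear_periodically(cache, 0.05))
    await asyncio.sleep(0.12)  # wait past at least one clear cycle
    task.cancel()
    print(len(cache))  # the background task has emptied the cache by now
```

The trade-off is coarseness: every entry expires at the same moment regardless of when it was inserted, but no per-entry bookkeeping is needed.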
Hi @hellysmile,
I wanted to ask whether you would accept this PR if I incorporated your suggestions? Judging from https://github.com/aio-libs/async_lru/pull/50#issuecomment-400426038, it seems you don't want this in your package (and you have every right to say that). But I can see a use case for it, especially in the async world.
We are using async_lru as a cache for multiple coroutine calls. We have a backend application that retrieves data from a database, does some computation, and then sends it to the frontend. It may happen that a single request generates multiple calls of a coroutine with the same arguments, and it's handy to just add an alru_cache decorator to save resources. However, the data in the database is being updated, and I need to invalidate the cache after some time. The easiest approach would be to add an expires keyword argument that invalidates the cache for me.
Maybe I could run a loop that invalidates the cache for me, but it's now used in several places and I think it would be better to put it right into the async_lru package.
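The proposed behaviour can be sketched as a small per-entry TTL decorator for coroutines. This is an illustration of the idea, not the async_lru API; the names `ttl_cache` and `expires` are hypothetical, and unlike alru_cache this sketch does not coalesce concurrent calls or bound the cache size:

```python
import asyncio
import functools
import time


def ttl_cache(expires: float):
    """Hypothetical decorator: cache coroutine results for `expires` seconds."""

    def decorator(coro):
        cache: dict = {}  # args -> (timestamp, result)

        @functools.wraps(coro)
        async def wrapper(*args):
            now = time.monotonic()
            entry = cache.get(args)
            if entry is not None and now - entry[0] < expires:
                return entry[1]  # entry still fresh: cache hit
            result = await coro(*args)  # expired or missing: recompute
            cache[args] = (now, result)
            return result

        return wrapper

    return decorator


@ttl_cache(expires=60.0)
async def fetch_user(user_id: int) -> dict:
    # stand-in for a real database query
    return {"id": user_id}
```

Repeated calls with the same arguments inside the window return the cached result; after `expires` seconds the coroutine runs again, which matches the "invalidate after some time" use case described above.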
What do you think? Will you accept the PR if I finish it?
I created a new PR https://github.com/aio-libs/async_lru/pull/131
In my mind async_lru follows the functools.lru_cache design; that is its strongest point.
lru_cache has no functionality for time-based expiration.
Why should we have it in the async version of lru_cache?
I agree with @hellysmile. If you want TTL-based expiration, you need another library.
It would simply suit me very well if async_lru had time-based expiration. The other library you are talking about would have all the features of async_lru plus expires, so I will probably resolve this by maintaining my own fork of this library.
I cannot tell why lru_cache does not have this functionality, but it certainly is one of the valid replacement strategies: https://en.wikipedia.org/wiki/Cache_replacement_policies#Time_aware_least_recently_used_(TLRU).
Also, alru_cache already has more arguments/methods than lru_cache, e.g. the cache_exceptions argument or the invalidate method.
If you do not want it in the library you maintain, I respect that and will release my own package.
@hellysmile ?
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.
Implemented by c46ada7