Engine/MemoryCache: Stage 1 improvements (RFC #2647)
Summary
This PR introduces the first round of improvements for the new in-memory cache in LiteDB 5.
Goal: make the cache more predictable, configurable, and robust under heavy concurrent load, while keeping all tests stable.
Key changes
- Volatile on scalar tuning fields
  - Marked scalar knobs like `_maxFreePages`, `_cleanupBatchSize`, and `_extends` as `volatile` to improve visibility across threads (see the field sketch after this list).
  - Time-based fields (`_idleBeforeEvict`, `_minCleanupInterval`) remain as `TimeSpan` (structs cannot be `volatile`).
- Safer on-demand cleanup triggers
  - Removed cleanup triggers from `NewPage()` and `GetWritablePage()` to avoid interfering with segment extension and `UniqueID` sequencing (fixes the failing `Cache_UniqueIDNumbering` test).
  - Cleanup still runs in the right places: `GetReadablePage()` (hot read path), `DiscardPage()` (when pages are freed), `Clear()` (manual cleanup), and when the free list exceeds `_maxFreePages` (forced).
  - Bounded, non-reentrant cleanup (`SemaphoreSlim`) with batched eviction and an idle policy (a sketch follows this list).
- Accurate comments & readability
  - Updated `Cleanup()` docs to reference `_cleanupBatchSize` and `_idleBeforeEvict` (instead of stale constants).
  - Small English B1/B2 comment pass; clarified eviction/cleanup intent.
- Profiles groundwork
  - Added `CacheProfile { Mobile, Desktop, Server }` and `ApplyProfile(profile)` (sketched after this list). Current defaults equal the Desktop profile.
  - Future work: pass the profile through an engine parameter (`// TODO` marker left in code).
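To make the field changes concrete, here is a minimal sketch of the tuning fields. The field names come from this PR; the types shown and the default values are illustrative assumptions, not the actual implementation.

```csharp
using System;

// Sketch only: scalar knobs are volatile so live tuning is visible across
// threads; TimeSpan knobs stay non-volatile because structs cannot be volatile.
// Types and default values here are placeholders, not the PR's actual numbers.
internal class MemoryCacheFieldsSketch
{
    private volatile int _maxFreePages = 200;      // forced-cleanup threshold (placeholder)
    private volatile int _cleanupBatchSize = 50;   // max pages evicted per pass (placeholder)
    private volatile int _extends = 0;             // segment-extension counter (placeholder type)

    private TimeSpan _idleBeforeEvict = TimeSpan.FromMinutes(1);     // placeholder
    private TimeSpan _minCleanupInterval = TimeSpan.FromSeconds(5);  // placeholder
}
```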
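Since the bounded, non-reentrant cleanup is the core behavioral change, here is a hedged sketch of how a `SemaphoreSlim`-guarded, batched, idle-aware cleanup can work. The class, method, and page types below are hypothetical and only illustrate the pattern, not the PR's actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch only: illustrates "bounded, non-reentrant cleanup with batched
// eviction and an idle policy". Names and defaults are placeholders.
public class CacheCleanupSketch
{
    private sealed class CachedPage
    {
        public DateTime LastUsed;
    }

    private readonly List<CachedPage> _pages = new List<CachedPage>();
    private readonly SemaphoreSlim _cleanupLock = new SemaphoreSlim(1, 1);

    private volatile int _cleanupBatchSize = 50;                      // placeholder
    private TimeSpan _idleBeforeEvict = TimeSpan.FromMinutes(1);      // placeholder
    private TimeSpan _minCleanupInterval = TimeSpan.FromSeconds(5);   // placeholder
    private DateTime _lastCleanup = DateTime.MinValue;

    public void TryCleanup(bool forced = false)
    {
        // Rate-limit: honor the minimum interval unless cleanup is forced
        // (e.g. the free list exceeded the _maxFreePages threshold).
        if (!forced && DateTime.UtcNow - _lastCleanup < _minCleanupInterval) return;

        // Non-reentrant: if another thread is already cleaning, skip instead of blocking.
        if (!_cleanupLock.Wait(0)) return;

        try
        {
            var evicted = 0;
            var now = DateTime.UtcNow;

            // Batched eviction with an idle policy: evict at most
            // _cleanupBatchSize pages that have been idle long enough.
            for (var i = _pages.Count - 1; i >= 0 && evicted < _cleanupBatchSize; i--)
            {
                if (now - _pages[i].LastUsed < _idleBeforeEvict) continue;

                _pages.RemoveAt(i);
                evicted++;
            }

            _lastCleanup = now;
        }
        finally
        {
            _cleanupLock.Release();
        }
    }
}
```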
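And a hedged sketch of the profiles groundwork: `CacheProfile` and `ApplyProfile` are the names from this PR, but the per-profile values below are placeholders chosen only to show the shape of the API.

```csharp
using System;

// Sketch only: per-profile values are placeholders, not the PR's actual numbers.
public enum CacheProfile { Mobile, Desktop, Server }

public class CacheProfileSketch
{
    private volatile int _maxFreePages;
    private volatile int _cleanupBatchSize;
    private TimeSpan _idleBeforeEvict;

    public void ApplyProfile(CacheProfile profile)
    {
        switch (profile)
        {
            case CacheProfile.Mobile:   // constrained devices: small cache, aggressive eviction
                _maxFreePages = 50;
                _cleanupBatchSize = 20;
                _idleBeforeEvict = TimeSpan.FromSeconds(30);
                break;

            case CacheProfile.Server:   // plenty of memory: large cache, lazy eviction
                _maxFreePages = 1000;
                _cleanupBatchSize = 200;
                _idleBeforeEvict = TimeSpan.FromMinutes(5);
                break;

            default:                    // Desktop: equals the current defaults (per this PR)
                _maxFreePages = 200;
                _cleanupBatchSize = 50;
                _idleBeforeEvict = TimeSpan.FromMinutes(1);
                break;
        }
    }
}
```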
Motivation
LiteDB is often used in embedded scenarios with high concurrency and limited resources.
These changes make cache tuning safer and clearer without adding locking overhead or altering public APIs.
Impact
- No breaking API changes
- No behavioral regressions: all tests pass with updated cleanup logic
- Performance: negligible overhead for volatile scalars; improved stability for live-tuned reads
Related
- RFC: #2647
- Bug Refs:
- #1756
- #2619
Next steps
- Wire `CacheProfile` selection via an engine parameter (Stage 2).
- Add targeted stress tests for eviction/idle behavior under extreme concurrency.
Checklist
- [x] Target branch: `dev-staging`
- [x] Builds and tests pass locally
- [x] No public API changes
- [x] Documentation/comments updated
- [x] Style consistent with project guidelines
Notes for reviewers
- This work was iterated on with assistance from ChatGPT and Amazon Q for code review and validation ideas.
- The cache was tested under heavy stress scenarios (high concurrency + forced cleanup) and behaved as expected without leaks or corruption.
@codex review
Codex Review: Didn't find any major issues. You're on a roll.
Just the one I could confirm in the timeframe I have. Otherwise, here is a full review from ChatGPT: https://gist.github.com/JKamsker/395ddb25cd24f7447216f82d4672a42b. I am also going through each of its points and validating them; maybe you want to get ahead of me and review its review as well. I think it's quite solid.
For the caching mechanism, something like this looks promising: Medium, "Killing O(n): How Timing Wheels Expire 10 Million Keys Effortlessly in Golang", i.e. putting cache items into timed buckets for a future worker to clean up (a rough sketch of the idea is below).
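For illustration only, here is a loose C# sketch of the timed-buckets idea from that article. It is not a full timing wheel and not LiteDB code; all names below are hypothetical.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Sketch only: expiring keys are hashed into coarse time buckets; a background
// worker drains only the buckets whose slot has passed instead of scanning
// every cached item on each pass.
public class TimedBucketsSketch<TKey>
{
    private readonly TimeSpan _tick;
    private readonly ConcurrentDictionary<long, ConcurrentBag<TKey>> _buckets =
        new ConcurrentDictionary<long, ConcurrentBag<TKey>>();

    public TimedBucketsSketch(TimeSpan tick) => _tick = tick;

    // Place a key into the bucket that covers its expiration time.
    public void Schedule(TKey key, TimeSpan ttl)
    {
        var slot = (DateTime.UtcNow + ttl).Ticks / _tick.Ticks;
        _buckets.GetOrAdd(slot, _ => new ConcurrentBag<TKey>()).Add(key);
    }

    // Called periodically by a worker: yields keys from all expired buckets.
    public IEnumerable<TKey> CollectExpired()
    {
        var nowSlot = DateTime.UtcNow.Ticks / _tick.Ticks;

        foreach (var slot in _buckets.Keys)
        {
            if (slot > nowSlot) continue;
            if (_buckets.TryRemove(slot, out var bucket))
            {
                foreach (var key in bucket) yield return key;
            }
        }
    }
}
```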
Hi @JKamsker
I haven't had time to review the GPT feedback regarding this change :(
Just as a comment, I'd like to add that this version (which is possibly incomplete) is much better than the current version of LiteDB 5, which has a severe memory leak.
My idea was to submit three versions: this would be the first, then the second with improvements, and the last with the final tuning, but as I said, it became impossible.
I'll do my best to resume this fix, which I believe is very necessary for LiteDb.
Thank you very much!