Dynamic growth of the LRU
In some situations it is not feasible to create an LRU with its full size up front.
Instead, to reduce memory consumption, the LRU should start with a small size and grow automatically until the configured size is reached.
So while len < size, adding items grows the LRU like a map.
Once len >= size, the regular LRU functionality is enabled.
This should take element lifetime into account.
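A rough structural sketch of that behavior (hypothetical names and layout, not the FreeLRU API; the actual growth policy is discussed below):

```go
// Sketch only: allocate small, grow on demand, and evict only once the
// configured size is reached. growingLRU and its fields are hypothetical.
type growingLRU[K comparable, V any] struct {
	size     uint32 // configured (maximum) LRU size
	capacity uint32 // currently allocated capacity, capacity <= size
	len      uint32 // number of stored elements
}

func (l *growingLRU[K, V]) Add(key K, value V) {
	switch {
	case l.len < l.capacity:
		// Room left: plain insert into the existing storage.
	case l.capacity < l.size:
		// Full, but below the configured size: grow the backing
		// storage like a map would (reallocate and rehash), then insert.
	default:
		// capacity == size: regular LRU behavior, i.e. evict the
		// oldest element (honoring lifetime/expiry) and insert.
	}
}
```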
Maybe an additional feature request for the dynamic growth: instead of regular incremental adds to the LRU map, the LRU map should grow in stages.
Example:
Max LRU size: 1024
Stages: 8
Initial LRU size: 128
At the moment the 129th element is added to the LRU map, the map's capacity should grow to 256 elements. With the 257th element, the capacity should grow to 384 elements. And so on, until the LRU map reaches its max LRU size.
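A runnable toy showing exactly these trigger points (illustrative only, not FreeLRU code):

```go
package main

import "fmt"

func main() {
	const maxSize, stage = 1024, 128 // 8 stages of 128 elements each
	capacity := stage                // initial LRU size: 128
	for n := 1; n <= maxSize; n++ { // n-th element being added
		if n > capacity && capacity < maxSize {
			capacity += stage // grow by one stage
			fmt.Printf("element %d grows the capacity to %d\n", n, capacity)
		}
	}
	// Output:
	// element 129 grows the capacity to 256
	// element 257 grows the capacity to 384
	// ...
	// element 897 grows the capacity to 1024
}
```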
Not sure I understand the algorithm here. Are you suggesting linear stages? If so, each stage would increase the size by 112 ((1024-128)/8 = 112), or by 128 if you count the initial size as one stage.
The standard resize scheme for arrays and maps is *2, i.e. exponentially increasing, which has several advantages over a linear approach. Maybe we start with that and add a linear approach later if requested!?
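A minimal sketch of that capped doubling (illustrative only):

```go
// nextCapacity doubles the current capacity, capped at the configured
// maximum: 128 -> 256 -> 512 -> 1024 for max == 1024.
func nextCapacity(current, max uint32) uint32 {
	if next := current * 2; next < max {
		return next
	}
	return max
}
```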
> If so, each stage would increase the size by 112 ((1024-128)/8 = 112), or by 128 if you count the initial size as one stage.
This example sounds more like 8 stages of 112 elements and one of 128.
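For reference, the two linear readings of the example work out as follows (worked numbers only):

```go
const (
	stepA = (1024 - 128) / 8 // = 112: capacities 128, 240, 352, ..., 1024
	stepB = 1024 / 8         // = 128: capacities 128, 256, 384, ..., 1024 (matches the example)
)
```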
> The standard resize scheme for arrays and maps is *2, i.e. exponentially increasing, which has several advantages over a linear approach.
With the Swiss Tables map implementation in Go 1.24, this assumption is no longer correct. Once a map reaches a certain threshold (a maxTableCapacity of 1024 elements), it is split into multiple, smaller sub-tables instead of doubling. This allows the map to grow more gradually: as more data is added, only the necessary sub-tables are created or expanded, leading to smoother, more predictable performance without the large, sudden memory allocations of the past.
The problem with this approach is that the hashing of these sub-tables (the Go-internal hashing, not the hashing of this package) is central to how they grow and/or split into more sub-tables. Looking at users of this package, most of them use an LRU size larger than 1024. So I think it is important to give users some control over the growth of their LRU.
So my suggestion is to introduce stages. The initial stage should be maxSize/stages. Maybe this is already sufficient as a start, letting the Swiss Tables implementation do the rest.
But as this package supports Go versions back to 1.18, users on pre-1.24 toolchains will not benefit from Swiss Tables, so a linear growth as suggested could be beneficial for them.
> As more data is added, only the necessary sub-tables are created or expanded, leading to smoother, more predictable performance without the large, sudden memory allocations of the past.
Thanks for the background, that is quite interesting with regard to a future rework of FreeLRU.
FreeLRU isn't using Go maps because the map and list algorithms are amalgamated into a single algorithm here to avoid GC overhead (the main purpose of this project!).
Using chunks of separate heap allocations internally is possibly the opposite of what this project is aiming for. At the very least, the effect on GC would need benchmarking.
> So I think it is important to give users some control over the growth of their LRU.
Maybe the fastest way forward would be a user-provided growth function. That would allow the most flexible adaptation to user needs.
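A sketch of what such a hook could look like; the GrowthFn type, the helper functions, and the NewWithGrowth constructor below are all hypothetical, not existing FreeLRU API:

```go
// GrowthFn returns the next capacity for the LRU, given the current
// capacity and the configured maximum (hypothetical API).
type GrowthFn func(current, max uint32) uint32

// doubleGrowth implements the standard *2 scheme, capped at max.
func doubleGrowth(current, max uint32) uint32 {
	if next := current * 2; next < max {
		return next
	}
	return max
}

// linearGrowth returns a GrowthFn that grows in fixed stages,
// e.g. step = maxSize/stages as suggested above.
func linearGrowth(step uint32) GrowthFn {
	return func(current, max uint32) uint32 {
		if next := current + step; next < max {
			return next
		}
		return max
	}
}

// Hypothetical usage:
//   lru, err := freelru.NewWithGrowth[string, int](1024, hashFn, linearGrowth(128))
```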