[Bug] Uneven Key Distribution Issue with Multiple Shards
Description
When we specify two or more shards, the current algorithm distributes keys unevenly, causing some data to be overwritten and resulting in an incorrect cache length; see #17. Note that this issue doesn't occur with a single shard, and once #24 is merged there won't be any problem even when storing zero values. For example:
cache := NewLRUCache[int, int](256, WithShards[int, int](2))
When initialized with the code above, each shard's list length is set to 128:
shardsize := (uint32(size) + c.mask) / (c.mask + 1) // 128
shards[0].list = make([]ttlnode[K, V], shardsize+1)
shards[1].list = make([]ttlnode[K, V], shardsize+1)
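For the arithmetic, here is a tiny self-contained sketch (the variable names are mine, not necessarily the library's internals) showing how that sizing works out for size = 256 and 2 shards:

package main

import "fmt"

func main() {
    size := uint32(256)
    mask := uint32(1) // 2 shards => mask = shards - 1
    shardsize := (size + mask) / (mask + 1)
    fmt.Println(shardsize) // 128: each shard holds at most 128 entries
}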
This means each shard can store at most 128 elements. We use the following code to decide which shard a key-value pair goes to when setting it:
func Set(key, value) {
    hash := hasher(key)
    shardIndex := hash & c.mask // 0 or 1
    return shards[shardIndex].Set(...)
}
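To see how keys land on shards, here is a minimal standalone sketch that mimics the hash & mask selection using a maphash-based hasher (an assumption on my part; the library's actual hash function may behave differently):

package main

import (
    "encoding/binary"
    "fmt"
    "hash/maphash"
)

func main() {
    seed := maphash.MakeSeed()
    var mask uint32 = 1 // 2 shards
    counts := make([]int, 2)

    for i := 0; i < 256; i++ {
        var buf [8]byte
        binary.LittleEndian.PutUint64(buf[:], uint64(i))
        hash := uint32(maphash.Bytes(seed, buf[:]))
        counts[hash&mask]++ // shard index: 0 or 1
    }

    // Nothing guarantees an exact 128/128 split; whatever the imbalance,
    // the fuller shard starts evicting once it passes 128 entries.
    fmt.Println("shard 0:", counts[0], "shard 1:", counts[1])
}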
Since the hash algorithm cannot guarantee an even distribution, many keys may be assigned to the same shard. When the number of elements in a shard exceeds 128, earlier entries are evicted and overwritten. This leads to the incorrect cache length mentioned in #17:
for i := 0; i < 256; i++ {
    cache.Set(i, i, 0) // assume 200 keys are assigned to shard 0
}

// The length should be 256, but it's actually shards[0].len + shards[1].len = 128 + 56 = 184,
// since 200 - 128 = 72 keys were overwritten in shard 0.
fmt.Println(cache.Len())
Suggestion
Currently, there isn't a good way to avoid this issue. Perhaps we could consider increasing shardsize, for example setting it to size * 0.8:
shardsize := uint32(float64(size) * 0.8) // 256 * 0.8 = 204
shards[0].list = make([]ttlnode[K, V], shardsize+1)
shards[1].list = make([]ttlnode[K, V], shardsize+1)
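To put a number on the extra memory, here is a quick sketch of the overhead under this sizing (my own illustration of the trade-off, not a patch):

package main

import "fmt"

func main() {
    size := 256
    shards := 2
    shardsize := int(float64(size) * 0.8) // 204
    allocated := shards * (shardsize + 1) // 410 nodes for a 256-entry cache
    fmt.Printf("allocated %d nodes for capacity %d (+%.0f%%)\n",
        allocated, size, 100*float64(allocated-size)/float64(size))
}

Note that a shard can still receive more than 204 keys in the worst case, so this only reduces the chance of premature eviction rather than eliminating it.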
However, this still wastes a lot of space. Please let me know your thoughts.
Agreed, this is a common issue with sharded LRUs; that's why this LRU's hit rate is slightly lower than a regular LRU's.
Would you happen to have any better suggestions? Or could we change the hash function, such as using a consistent hashing algorithm?
I'm afraid consistent hashing won't help here, but I'm not sure. Let me think about it, or could you please give more hints/tips? Thanks~