quick-cache
Weight capacity per shard and highly varying cache item weights
Hi,
I'm using quick-cache in my project, where the environment and configuration are as follows:
- The number of CPU cores is high (about 200)
- Cache item weights vary widely.
- The total cache capacity is relatively small.
Given that, when I set estimated_items_capacity to the average cache item size (or a smaller value), the number of shards becomes too high and the weight capacity per shard becomes too small, smaller than a single large cache item. In that case, the cache appears not to cache the larger items at all.
So, for now, I'm adjusting the estimated_items_capacity value to avoid that situation.
But I'm also worried that this makes the number of shards too small to preserve concurrency.
If the number of shards becomes 1, what happens to the concurrency? Can the async insert functions run concurrently, or do the executions become serialized? Is there a way to share the weight capacity across the shards?
Thank you.
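To make the failure mode concrete, here is a small self-contained sketch of the per-shard budget arithmetic with hypothetical numbers (illustrative only, not quick-cache's actual internals; the real implementation may not split the budget exactly evenly):

```rust
// Sketch: a sharded cache divides its total weight budget across shards,
// so each shard can hold roughly total_weight_capacity / num_shards.
// An item heavier than that per-shard budget can never fit in its shard
// and is effectively uncacheable.

fn per_shard_weight_capacity(total_weight_capacity: u64, num_shards: u64) -> u64 {
    total_weight_capacity / num_shards
}

fn item_can_fit(item_weight: u64, total_weight_capacity: u64, num_shards: u64) -> bool {
    item_weight <= per_shard_weight_capacity(total_weight_capacity, num_shards)
}

fn main() {
    // 64 MiB total budget split across 128 shards -> 512 KiB per shard.
    let total: u64 = 64 * 1024 * 1024;
    assert_eq!(per_shard_weight_capacity(total, 128), 512 * 1024);

    // A 2 MiB item cannot fit into any single one of the 128 shards...
    assert!(!item_can_fit(2 * 1024 * 1024, total, 128));

    // ...but with only 16 shards (4 MiB per shard) the same item fits.
    assert!(item_can_fit(2 * 1024 * 1024, total, 16));
}
```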
That's a lot of cores :scream: I suggest manually setting the number of shards to balance the large item sizes against the parallelism ceiling.
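A hedged sketch of pinning the shard count via the crate's OptionsBuilder (method names as I read them from the quick_cache docs; the Weighter return type and the with_options signature have varied between versions, so verify against the version you use — the specific numbers are made up for illustration):

```rust
use quick_cache::{
    sync::{Cache, DefaultLifecycle},
    DefaultHashBuilder, OptionsBuilder, Weighter,
};

// Weigh each entry by its payload size in bytes.
#[derive(Clone)]
struct ByteWeighter;

impl Weighter<String, Vec<u8>> for ByteWeighter {
    fn weight(&self, _key: &String, val: &Vec<u8>) -> u64 {
        val.len() as u64
    }
}

fn build_cache() -> Cache<String, Vec<u8>, ByteWeighter> {
    let options = OptionsBuilder::new()
        .estimated_items_capacity(10_000)   // rough expected item count
        .weight_capacity(64 * 1024 * 1024)  // 64 MiB total weight budget
        // Pin the shard count instead of letting it be derived from the
        // item estimate: 16 shards keeps ~4 MiB of budget per shard, so
        // items of a few MiB still fit, while retaining some parallelism.
        .shards(16)
        .build()
        .unwrap();
    Cache::with_options(
        options,
        ByteWeighter,
        DefaultHashBuilder::default(),
        DefaultLifecycle::default(),
    )
}
```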
If the number of shards becomes 1, what happens to the concurrency? Can the async insert functions run concurrently, or do the executions become serialized?
Parallelism will be like a RwLock with 1 shard. Methods like get_or_insert_async (assuming that's what you mean by async insert) will still run concurrently.
@arthurprs Thank you.
Parallelism will be like a RwLock with 1 shard. Methods like get_or_insert_async (assuming that's what you mean by async insert) will still run concurrently.
Yes, I meant get_or_insert_async. Thank you.
Closing for the time being, let me know if you have any other questions.