Add memory pooling for chunks fetched from cache when fine-grained caching is enabled
**What this PR does**
We've just rolled out fine-grained chunks fetching and caching in zone-a of a dev cluster, and all metrics are better than in the other two zones (where the new feature is disabled). This is good!
However, I noticed that the allocated bytes/sec in zone-a is a bit higher compared to the other zones. After some profiling, it looks like the main difference comes from `memcache.Client.GetMulti()`:

The old implementation uses a memory pool for the data fetched from the cache, while the new one doesn't. I think (but I'm not sure) that @dimitarvdimitrov experimented with it and didn't see any real benefit. However, I would like to challenge that and experiment again with a memory pool (see my test results in https://github.com/grafana/mimir/pull/4255#issuecomment-1437209112).
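For context, a rough sketch of the idea (not the actual implementation in this PR): keep a `sync.Pool` of byte slices and copy the values returned by the cache client into pooled buffers, so the per-request allocations from `GetMulti()` can be reused instead of being garbage collected. The names (`chunkBytesPool`, `copyToPooledBuffer`, `releasePooledBuffer`) and the 16KB default capacity are hypothetical and just for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// chunkBytesPool is a hypothetical pool of byte slices reused across cache
// fetches, to reduce the allocation rate attributed to GetMulti-style calls.
// The 16KB initial capacity is an arbitrary assumption for this sketch.
var chunkBytesPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 16*1024)
		return &b
	},
}

// copyToPooledBuffer copies a value fetched from the cache into a pooled
// buffer instead of retaining the freshly allocated slice returned by the
// cache client. The caller must release the buffer once done with it.
func copyToPooledBuffer(value []byte) *[]byte {
	buf := chunkBytesPool.Get().(*[]byte)
	*buf = append((*buf)[:0], value...)
	return buf
}

// releasePooledBuffer returns a buffer to the pool so a later fetch can
// reuse its backing array.
func releasePooledBuffer(buf *[]byte) {
	chunkBytesPool.Put(buf)
}

func main() {
	// Simulate a value returned by a cache GetMulti call.
	fetched := []byte("chunk bytes fetched from memcached")

	buf := copyToPooledBuffer(fetched)
	fmt.Printf("pooled copy: %q (cap %d)\n", *buf, cap(*buf))
	releasePooledBuffer(buf)
}
```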
**Which issue(s) this PR fixes or relates to**
N/A
**Checklist**
- [ ] Tests updated
- [ ] Documentation added
- [ ] `CHANGELOG.md` updated - the order of entries should be `[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`
I've deployed it for 75 minutes in a dev cluster under normal load (no load testing running):
- zone-a: `main` version, fine-grained chunks caching enabled
- zone-b: this PR, fine-grained chunks caching enabled

We can also see the impact of the memory pooling by looking at continuous profiling:

Since I'm not in a hurry, I will wait for Dimitar to come back from PTO. I would like his opinion too.