[5.x] Stache performance: batch get items for Redis/Memcached/DynamoDB to reduce network overhead
Description
Optimises Stache store item retrieval by using batch cache operations for Redis, Memcached, and DynamoDB users.
- Adds a `getItems()` method to `BasicStore` that fetches multiple items efficiently
- Groups keys by child store in `AggregateStore` before delegating
- Uses `cache()->many()` for batch fetching (a single `MGET` for Redis)
- Only enables batch mode for network-based cache drivers (Redis, Memcached, DynamoDB)
- File/Array cache users continue using individual lookups, since there's no network overhead to avoid
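To make the flow above concrete, here's a minimal stand-alone sketch of the pattern. The helper names (`cacheMany()`, `groupKeysByChildStore()`) and the `store::key` key shape are illustrative stand-ins, not the actual `BasicStore`/`AggregateStore` code; `cacheMany()` plays the role of `cache()->many()`.

```php
<?php
// Stand-in for cache()->many(): resolves many keys in one "round-trip".
function cacheMany(array $keys, array $backing): array
{
    $results = [];
    foreach ($keys as $key) {
        $results[$key] = $backing[$key] ?? null;
    }
    return $results;
}

// AggregateStore-style grouping: bucket keys by their child store
// (assumes "store::key" shaped keys purely for illustration).
function groupKeysByChildStore(array $keys): array
{
    $groups = [];
    foreach ($keys as $key) {
        [$store] = explode('::', $key, 2);
        $groups[$store][] = $key;
    }
    return $groups;
}

$backing = [
    'entries::home'  => 'Home page',
    'entries::about' => 'About page',
    'terms::news'    => 'News term',
];

// One batch fetch per child store instead of one fetch per key.
$items = [];
foreach (groupKeysByChildStore(array_keys($backing)) as $store => $keys) {
    $items += cacheMany($keys, $backing);
}
```

The key point is the shape: keys are bucketed per child store first, so each store issues a single multi-get rather than N single gets.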
Performance Impact
For Redis/Memcached users fetching many items (e.g. search results), the gains are substantial. I wrote a small benchmark tool; the table below summarises the results. I've only benchmarked with Redis, but since the other drivers also pay a network round-trip per request, the improvement should be roughly similar for DynamoDB/Memcached:
| Items | Batch (new) | Individual (old) | Improvement | Speedup |
|---|---|---|---|---|
| 100 | 0.38 ms | 4.61 ms | 91.8% | 12x |
| 1000 | 2.42 ms | 27.76 ms | 91.3% | 11.5x |
| 5000 | 11.54 ms | 134.52 ms | 91.4% | 11.7x |
Approximately a 12x speedup for Redis users: a single `MGET` is much faster than N individual `GET` calls.
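A toy reproduction of why the table looks the way it does (this is not the real benchmark tool): a fake "network" cache where every call pays a fixed round-trip delay, so N individual gets cost roughly N round-trips while one batched `many()` pays the delay once. The latency figure is an assumption for illustration.

```php
<?php
const ROUND_TRIP_US = 200; // assumed per-request latency, for illustration

// One round-trip per key.
function fakeGet(array $store, string $key)
{
    usleep(ROUND_TRIP_US);
    return $store[$key] ?? null;
}

// One round-trip for the whole batch (the MGET-style path).
function fakeMany(array $store, array $keys): array
{
    usleep(ROUND_TRIP_US);
    $out = [];
    foreach ($keys as $key) {
        $out[$key] = $store[$key] ?? null;
    }
    return $out;
}

$store = [];
foreach (range(1, 50) as $i) {
    $store["item:$i"] = "value $i";
}
$keys = array_keys($store);

$t = microtime(true);
foreach ($keys as $key) {
    fakeGet($store, $key);
}
$individualMs = (microtime(true) - $t) * 1000;

$t = microtime(true);
$batched = fakeMany($store, $keys);
$batchMs = (microtime(true) - $t) * 1000;

printf("individual: %.1f ms, batched: %.1f ms\n", $individualMs, $batchMs);
```

Because the per-call overhead dominates, the ratio grows with a higher round-trip time and stays roughly constant with item count, which matches the ~91% improvement holding steady across the table.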
For File/Array cache users: no change (driver detection skips batch mode). With in-memory Array cache, the improvement is minimal because there's no network latency to optimise. The overhead of building the batch request can even make it slightly slower for small datasets, so it's skipped.
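The driver detection could be as simple as a whitelist check, since batch mode is only worthwhile when each cache call costs a network round-trip. This is a hypothetical sketch of that check, not the actual `shouldUseBatchCaching` implementation:

```php
<?php
// Drivers where each cache call incurs a network round-trip.
const NETWORK_DRIVERS = ['redis', 'memcached', 'dynamodb'];

// Hypothetical version of the shouldUseBatchCaching check:
// batch mode only pays off when lookups go over the network.
function shouldUseBatchCaching(string $driver): bool
{
    return in_array($driver, NETWORK_DRIVERS, true);
}
```

Usage: `shouldUseBatchCaching('redis')` returns `true`, while `shouldUseBatchCaching('file')` and `shouldUseBatchCaching('array')` return `false`, falling back to individual lookups.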
Use Cases
This improvement benefits any code path that retrieves multiple Stache items at once, such as:
- Search results hydration
- Entry queries returning many results
- Bulk operations on entries/terms/assets
@jasonvarga I've gotten a little carried away with performance improvements and found this too. Keen to hear your thoughts. I'm not 100% sure that having `shouldUseBatchCaching` check which stores are in use is the best approach; basically it just needs to skip batch mode for disk-based storage.