
Rule eval result caching to improve Ruler performance

Open alvinlin123 opened this issue 2 months ago • 1 comment

Is your feature request related to a problem? Please describe. It's not related to a problem, but I would like rule groups to be evaluated faster.

Describe the solution you'd like For a rule group with two rules, A and B, where B depends on A: if we cache the result of A, then B can use the cached result rather than waiting for A's result to be written to the ingesters and then read back from them.

This will be particularly useful when A produces a lot of time series, since writing them to and then querying them from the ingesters takes time.
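To make the idea concrete, here is a minimal, self-contained Go sketch of the proposed behaviour. It is not Cortex's actual ruler code; the types and names (`Sample`, `ruleResultCache`, `queryForRule`, `queryIngesters`) are hypothetical stand-ins, and the cache is assumed to live only within one evaluation cycle of a single rule group.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Sample is a simplified stand-in for one output series of a rule
// evaluation (not Cortex's real types).
type Sample struct {
	Labels map[string]string
	Value  float64
	TS     time.Time
}

// ruleResultCache keeps the most recent evaluation output of each rule,
// keyed by the metric name the rule records.
type ruleResultCache struct {
	mu      sync.RWMutex
	results map[string][]Sample
}

func newRuleResultCache() *ruleResultCache {
	return &ruleResultCache{results: map[string][]Sample{}}
}

// Store saves a rule's output after it has been evaluated (and, in a
// real ruler, also pushed to the ingesters).
func (c *ruleResultCache) Store(metricName string, out []Sample) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.results[metricName] = out
}

// Lookup returns the cached output for a metric name, if present.
func (c *ruleResultCache) Lookup(metricName string) ([]Sample, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	out, ok := c.results[metricName]
	return out, ok
}

// queryForRule is where the proposal changes behaviour: a dependent rule
// first checks whether an earlier rule in the same group already produced
// the series it needs, and only falls back to querying the ingesters
// (a stub here) when there is no cached result.
func queryForRule(cache *ruleResultCache, metricName string) []Sample {
	if cached, ok := cache.Lookup(metricName); ok {
		return cached // skip the write-then-read round trip to the ingesters
	}
	return queryIngesters(metricName)
}

// queryIngesters stands in for the normal remote query path.
func queryIngesters(metricName string) []Sample {
	fmt.Println("querying ingesters for", metricName)
	return nil
}

func main() {
	cache := newRuleResultCache()

	// Rule A (e.g. a recording rule producing job:http_requests:rate5m)
	// is evaluated first; its output is cached alongside the remote write.
	cache.Store("job:http_requests:rate5m", []Sample{
		{Labels: map[string]string{"job": "api"}, Value: 42, TS: time.Now()},
	})

	// Rule B depends on A's recorded metric and reads it from the cache
	// instead of waiting for the ingesters.
	fmt.Println(queryForRule(cache, "job:http_requests:rate5m"))
}
```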

Describe alternatives you've considered No, I didn't think of other alternatives. Of course concurrent_evals_enabled would boost performance, but rule eval result caching would boost it further.

Additional context None.

alvinlin123 avatar Oct 10 '25 20:10 alvinlin123

Have you tried using the query-frontend from the ruler? There are good reports from the community.

I like your suggestion, but at the same time I am not sure. This can generate inconsistencies if the series from A end up getting rejected by the ingesters.
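A short sketch of that failure mode, under the assumption that a push can be partially rejected (for example, by a per-tenant series limit; `pushToIngesters` and the fixed limit below are hypothetical):

```go
package main

import "fmt"

// pushToIngesters stands in for the ruler's remote write of rule A's
// output; partial rejection is simulated with a fixed series limit.
func pushToIngesters(samples []string, seriesLimit int) (accepted []string) {
	for i, s := range samples {
		if i >= seriesLimit {
			fmt.Println("rejected:", s)
			continue
		}
		accepted = append(accepted, s)
	}
	return accepted
}

func main() {
	ruleAOutput := []string{"series-1", "series-2", "series-3"}

	// Only part of rule A's output makes it into the ingesters.
	stored := pushToIngesters(ruleAOutput, 2)

	// If rule B evaluates against the full cached output, it sees 3 series,
	// while a query against the ingesters (today's behaviour) would see 2.
	fmt.Printf("cache sees %d series, ingesters store %d series\n",
		len(ruleAOutput), len(stored))
}
```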

Also, there is concurrent_evals, which I have tried, but it only works in a single ruler pod. A better concurrent_evals, one that scales a single group across many pods, is much needed and a bigger problem.

What should we prioritize: low latency or horizontal scalability?

friedrichg avatar Oct 12 '25 17:10 friedrichg