Ruler not evaluating any rules
Thanos version used: Thanos v0.23.1, deployed as a sidecar
Object Storage Provider: S3
What happened:
- Thanos Ruler did not evaluate any rules, causing an alert to fire (the alert definition used is the same as the ThanosNoRuleEvaluations alert from the Thanos mixin linked here); a rough way to confirm this state from the ruler's own metrics is sketched right after this list.
- Ruler pods were also up and healthy.
- After the ruler stopped evaluating, it did not log any lines.
- The Ruler memory profile was also affected:
- Issue was resolved after ruler pods were restarted
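For reference, this state can be double-checked from the ruler's own /metrics endpoint (the host and port below are placeholders for our setup); ThanosNoRuleEvaluations essentially fires when the evaluation rate drops to zero while rules are still loaded:

```sh
# Placeholder address; point this at the ruler's HTTP port.
RULER=thanos-rule-0.monitoring.svc:10902

# Counters from the embedded Prometheus rule manager. If these stop
# increasing while thanos_rule_loaded_rules stays > 0, the ruler is in the
# "no evaluations" state described above.
curl -s "http://${RULER}/metrics" | grep -E \
  'prometheus_rule_evaluations_total|prometheus_rule_group_last_evaluation_timestamp_seconds|thanos_rule_loaded_rules'
```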
What you expected to happen:
- Rules don't stop getting evaluated if there are rules to evaluate
- Ruler always evaluates in the specified interval
- In case ruler stops evaluating, logs are sent
Anything else we need to know:
Screenshots that may help debugging the issue:
After restarting the pods:
This is an epic report - @jessicalins thank you! Perfect pattern for providing all possible info.
What about `${HTTP_IP}:${HTTP_PORT}/debug/pprof/goroutine?debug=1` of Rule when this happens? Could you please upload it? I'm still not sure what happened here.
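In case it helps next time, grabbing those profiles is just a couple of HTTP requests against the ruler's HTTP port (the addresses below are placeholders; these are the standard net/http/pprof endpoints Thanos exposes):

```sh
# Placeholders; point these at the stuck ruler's HTTP endpoint.
HTTP_IP=thanos-rule-0.monitoring.svc
HTTP_PORT=10902

# Goroutine dump: debug=1 is a grouped summary, debug=2 has full stacks.
curl -s "http://${HTTP_IP}:${HTTP_PORT}/debug/pprof/goroutine?debug=1" > goroutine-debug1.txt
curl -s "http://${HTTP_IP}:${HTTP_PORT}/debug/pprof/goroutine?debug=2" > goroutine-debug2.txt

# Heap profile, useful alongside the memory graphs above.
curl -s "http://${HTTP_IP}:${HTTP_PORT}/debug/pprof/heap" > heap.pprof
```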
Yup, too late ): Good point about goroutines - we forgot. Let's capture it next time it happens. We lost all pprof things.
I happen to have the same behaviour on my clusters since the v0.23.1 update.
Rolling back Thanos Ruler to 0.22.0 did the trick for us as a workaround for this issue.
Maybe I can help by providing those pprof dumps. I don't have experience with that right now, but I can try the request you provided, @GiedriusS.
Edit: I spoke too soon: rolling back to 0.22.0 didn't actually help that much. We got the values of our recording rules back for some time, but now they are missing again.
This is from one of our Thanos Rulers currently failing to evaluate some recording rules (I haven't figured out yet whether this applies to all of them). Version 0.22.0.
I'm waiting to have some failures from a 0.23.1 ruler.
One lead we are testing right now for this issue: fine-tuning Thanos Query and Query Frontend. We have increased some concurrency parameters and the like to ensure there is no bottleneck on the query path that would slow down Thanos Ruler queries.
It is still a bit too soon to draw any conclusions, though; as of now our Thanos Ruler recording rules are much more stable.
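For anyone wanting to try the same thing, the tuning amounts to raising a few flags on thanos query and thanos query-frontend. The flag names below are the upstream ones, but the values are only examples from our environment (not recommendations), and the downstream URL is a placeholder:

```sh
# thanos query: allow more concurrent queries and a longer timeout so ruler
# evaluations are less likely to queue behind slow range queries.
# (Only the relevant flags are shown; stores/endpoints etc. are unchanged.)
thanos query \
  --query.max-concurrent=40 \
  --query.timeout=2m

# thanos query-frontend: raise per-request parallelism for split range queries.
thanos query-frontend \
  --query-range.split-interval=24h \
  --query-range.max-query-parallelism=32 \
  --query-frontend.downstream-url=http://thanos-query.monitoring.svc:9090
```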
Update: increasing Thanos Query performance helped for some time, but eventually our Thanos Rule instances end up evaluating no rules at all. The only thing I can add is that the number of goroutines increases a lot when Thanos Ruler stops evaluating.

So I suppose something clogs Thanos Ruler at some point and those goroutines never end properly.
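A cheap way to watch for that pattern without pprof access is the standard Go runtime gauge the ruler already exports; the query endpoint and job label below are assumptions about the scrape setup:

```sh
# go_goroutines is exported by the Go client library Thanos uses, so a
# sustained climb on the ruler shows up in whatever Prometheus/Thanos
# scrapes it. The job label here is an assumption; adjust to your setup.
curl -sG "http://thanos-query.monitoring.svc:9090/api/v1/query" \
  --data-urlencode 'query=go_goroutines{job="thanos-rule"}'
```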
We hit this too in one of our clusters with ruler version 0.23.1 and the same pattern (an increase in the number of goroutines over time). I am unfortunately not able to provide pprof output, because the priority when this was discovered was to mitigate, so we restarted all the pods. Could it however be possible that this is caused by a similar issue to https://github.com/thanos-io/thanos/pull/4795?
@jleloup We didn't encounter this kind of issue with v0.21.1, so I am going to roll back the ruler to that version.
v0.23.2 contains the fix for https://github.com/thanos-io/thanos/pull/4795 so I'd suggest trying that out to see whether it helps (:
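If it helps, bumping just the ruler image is enough to test this. Assuming a Kubernetes StatefulSet named thanos-rule with a container named thanos (adjust the names and namespace to your deployment), that would look roughly like:

```sh
# Roll only the ruler forward to the release containing the fix.
kubectl -n monitoring set image statefulset/thanos-rule \
  thanos=quay.io/thanos/thanos:v0.23.2
kubectl -n monitoring rollout status statefulset/thanos-rule
```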
@GiedriusS thanks for the quick response. Quick question on that: is that code path executed in ruler mode?
Ruler executes queries using the same /api/v1/query_range API and that API might not return any responses due to https://github.com/thanos-io/thanos/pull/4795. So, I think what happens in this case is that the Prometheus ruler manager continuously still tries to evaluate those alerting/recording rules but because no response is retrieved from Thanos, the memory usage stays more or less the same. :thinking:
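One way to sanity-check that theory while a ruler is stuck is to issue the same kind of range query directly against the querier and see whether a response comes back at all; the endpoint and expression below are placeholders:

```sh
# Same /api/v1/query_range API the ruler goes through; if this hangs or
# times out while the ruler is stuck, the problem is on the query path
# rather than in the rule manager itself.
curl -sG --max-time 60 "http://thanos-query.monitoring.svc:9090/api/v1/query_range" \
  --data-urlencode 'query=up' \
  --data-urlencode "start=$(date -d '1 hour ago' +%s)" \
  --data-urlencode "end=$(date +%s)" \
  --data-urlencode 'step=60'
```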
That might be what happened in our case: we upgraded all Thanos components from v0.21.0 to v0.23.1. We noticed some query performance degradation (at the same time the ruler in one cluster got stuck this way), so we downgraded the Thanos Query instances but not the ruler instances, and we didn't notice this ruler being stuck in this state until now.
Hello, I think I have the same issue with 0.24. Can others confirm? I also commented on https://github.com/thanos-io/thanos/issues/4924, which may be a duplicate.
Facing this in 0.24 as well.
Hello! Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this issue or push a commit. Thanks!
If there is no activity in the next two weeks, this issue will be closed (we can always reopen an issue if we need to!). Alternatively, use the remind command if you wish to be reminded at some point in the future.
This issue is still being observed in thanos:v0.24.0