
Distributors OOM on a single slow ingester in the cluster

Open pracucci opened this issue 4 years ago • 12 comments

Yesterday all distributors in one of our Cortex clusters were continuously OOMKilled. The root cause analysis showed that the issue was caused by a single ingester running on a failing Kubernetes node: the node was up, but very slow.

This issue is due to how the quorum works. When the distributors receive a Push() request, the time series are sharded and then sent to 3 ingesters (we have a replication factor of 3). The distributor's Push() request completes as soon as all series are pushed to at least 2 ingesters.

In the case of a very slow ingester, the distributor piles up in-flight requests towards it, while the inbound Push() request completes as soon as the other ingesters successfully finish the ingestion.

This causes the memory used by the distributors to increase due to the in-flight requests towards the slow ingester.

In a high-traffic Cortex cluster, distributors can hit their memory limit before the timeout of the in-flight requests towards the slow ingester expires, causing all distributors to be OOMKilled (and subsequent distributor restarts will OOM again until the very slow ingester is removed from the ring).
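To illustrate the mechanism, here is a minimal Go sketch (not the actual Cortex code) of quorum-based replication with replication factor 3 and quorum 2: Push() returns to the caller once two replicas succeed, while the request towards the slow replica keeps running in the background on a detached context and keeps the payload referenced in memory. All names here (`pushWithQuorum`, `fakeIngester`, the 20s timeout) are illustrative assumptions.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"
)

type ingesterClient interface {
	Push(ctx context.Context, series []byte) error
}

// pushWithQuorum sends the series to all replicas and returns as soon as a
// quorum of them has acknowledged the write.
func pushWithQuorum(replicas []ingesterClient, series []byte) error {
	const quorum = 2
	results := make(chan error, len(replicas)) // buffered: late replies never block

	// The pushes use a context detached from the caller, so they are not
	// cancelled when pushWithQuorum returns (the behaviour the thread
	// attributes to #736); only this timeout bounds the slow replica.
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)

	var wg sync.WaitGroup
	for _, r := range replicas {
		wg.Add(1)
		go func(r ingesterClient) {
			defer wg.Done()
			results <- r.Push(ctx, series)
		}(r)
	}
	// Release the timeout context only once every replica has answered,
	// possibly long after the caller already got its response.
	go func() {
		wg.Wait()
		cancel()
	}()

	succeeded, failed := 0, 0
	for range replicas {
		if err := <-results; err != nil {
			failed++
		} else {
			succeeded++
		}
		if succeeded >= quorum {
			return nil // the slow replica's push may still be in flight, holding memory
		}
		if failed > len(replicas)-quorum {
			return errors.New("quorum not reached")
		}
	}
	return errors.New("quorum not reached")
}

// fakeIngester simulates an ingester that takes `delay` to acknowledge a push.
type fakeIngester struct{ delay time.Duration }

func (f fakeIngester) Push(ctx context.Context, series []byte) error {
	select {
	case <-time.After(f.delay):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	replicas := []ingesterClient{
		fakeIngester{10 * time.Millisecond},
		fakeIngester{10 * time.Millisecond},
		fakeIngester{10 * time.Second}, // the very slow ingester
	}
	start := time.Now()
	err := pushWithQuorum(replicas, []byte("series payload"))
	// Returns after ~10ms even though the third push is still in flight.
	fmt.Println("push returned after", time.Since(start), "err:", err)
}
```

With many tenants pushing concurrently, each of these lingering background pushes keeps its payload alive, which is the memory growth described above.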

pracucci avatar Dec 10 '19 07:12 pracucci

I think this might be caused by https://github.com/cortexproject/cortex/pull/736 - we used to cancel the outstanding requests; now we pile them up as you said.

tomwilkie avatar Dec 17 '19 10:12 tomwilkie

#858 talks about a similar situation - we need to limit the number of "backgrounded" requests.

It sounds like a smaller timeout would help.

#736 was done for good reasons, and is essential to the efficiency gain from #1578.

bboreham avatar Dec 17 '19 10:12 bboreham

> #736 was done for good reasons, and is essential to the efficiency gain from #1578.

Can you please elaborate on how #1578 is related to #736? Is it to make sure that each ingester gets exactly the same data, and not only part of it due to the parent context being cancelled/timed out?

pstibrany avatar Dec 18 '19 13:12 pstibrany

Yes, if you cancel the 3rd push every time then each ingester will have a random sprinkling of holes in the data, so the checksums won't match.

bboreham avatar Dec 18 '19 13:12 bboreham

BTW I just added a link to a blog post in #1578 that describes the efficiency gains.

bboreham avatar Dec 18 '19 13:12 bboreham

> BTW I just added a link to a blog post in #1578 that describes the efficiency gains.

Thanks. I just wanted to make sure I understand it correctly, as I was adding a similar thing to Loki earlier today and hope to see similar benefits. (Loki already uses the #736 change, so all is good there.)

pstibrany avatar Dec 18 '19 13:12 pstibrany

Today we ran into the same issue which caused an outage of the write path in our prod environment.

  • At first, the CPU usage of a single ingester jumped from the expected 25-40% to 80-100%.
  • At the same time its RAM usage ramped up until the ingester eventually got OOM killed (under normal conditions the ingester uses about 25% of the available RAM). It took just 5 minutes to exceed the RAM limit.
  • Even before the ingester was OOM killed, the first distributor pods had already been OOM killed. Eventually all distributors were constantly restarting because of OOM kills.

I am unsure why the Cortex ingester was slow at all, but I noticed it was always the same ingester. I did not see any sign of the underlying Kubernetes node being faulty, but I resolved the issue by draining that node so that a new ingester would start. The problematic ingester failed to leave the ring, so I also had to manually forget it. Since then the cluster has been stable again.

weeco avatar Feb 25 '20 13:02 weeco

We could count the number of in-flight requests to ingesters and fail (response 5xx) the incoming request when that number goes over a threshold. This would prevent OOM on the distributor.
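A minimal sketch of this idea, under assumptions of my own (a hypothetical `pushHandler` and an illustrative `maxInflight` value, not the actual Cortex implementation): the distributor sheds load with a 5xx once too many pushes are in flight instead of queuing behind the slow ingester.

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// Illustrative threshold; a real limit would have to be tuned per cluster.
const maxInflight = 2000

var inflight atomic.Int64

func pushHandler(w http.ResponseWriter, r *http.Request) {
	if inflight.Add(1) > maxInflight {
		inflight.Add(-1)
		// Shed load instead of piling up requests and eventually OOMing.
		http.Error(w, "too many in-flight push requests", http.StatusServiceUnavailable)
		return
	}
	defer inflight.Add(-1)

	// ... shard the series and push them to the ingesters here ...
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/api/v1/push", pushHandler)
	_ = http.ListenAndServe(":8080", nil)
}
```

Note that, as pointed out later in the thread, a counter decremented when the handler returns stops counting the third push that is still running in the background, so on its own this only partially bounds memory.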

bboreham avatar Feb 25 '20 17:02 bboreham

Slightly more sophisticated:

Count the number of requests in-flight per ingester. If one of them is over a threshold, treat that ingester as unhealthy and spill the samples to the next one. Thus we don’t 500 back to the caller unless nearly all ingesters are impacted. Also we can expose the per-ingester counts as metrics.
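A sketch of this per-ingester variant, again with hypothetical names (`ingester`, `pickReplicas`, `perIngesterLimit` are illustrative, not Cortex code): replica selection skips an ingester whose in-flight count is above the threshold and spills to the next one in the ring.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Illustrative threshold, not a real Cortex setting.
const perIngesterLimit = 500

type ingester struct {
	addr     string
	inflight atomic.Int64
}

// pickReplicas walks the ring order and skips ingesters whose in-flight
// count is above the limit, so a single slow ingester is spilled over
// instead of failing the whole request.
func pickReplicas(ring []*ingester, replicationFactor int) []*ingester {
	var picked []*ingester
	for _, ing := range ring {
		if ing.inflight.Load() > perIngesterLimit {
			continue // treat as unhealthy, spill the samples to the next one
		}
		picked = append(picked, ing)
		if len(picked) == replicationFactor {
			break
		}
	}
	return picked
}

func main() {
	ring := []*ingester{
		{addr: "ingester-0"},
		{addr: "ingester-1"},
		{addr: "ingester-2"},
		{addr: "ingester-3"},
	}
	ring[1].inflight.Store(1000) // simulate the slow ingester piling up requests
	for _, ing := range pickReplicas(ring, 3) {
		fmt.Println("push to", ing.addr)
	}
}
```

The same counters could also be exported as per-ingester gauges to get the visibility mentioned above.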

bboreham avatar Feb 26 '20 13:02 bboreham

+1

jakirpatel avatar Aug 31 '21 07:08 jakirpatel

Is there any fix for this bug ?

jakirpatel avatar Aug 31 '21 07:08 jakirpatel

We added -distributor.instance-limits.max-inflight-push-requests and -ingester.instance-limits.max-inflight-push-requests in 1.9.0.

Note that -distributor.instance-limits.max-inflight-push-requests does not address this problem on its own, because it decrements the counter after 2 responses have been received; the 3rd is still active but not counted. But I think setting -ingester.max-concurrent-streams will prevent new calls from starting, so all three together should work as a fix.
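For reference, a sketch of how those limits might be combined on the command line; the flag names are the ones mentioned above, but the values are purely illustrative:

```
# Illustrative values only, not recommended defaults.
-distributor.instance-limits.max-inflight-push-requests=2000
-ingester.instance-limits.max-inflight-push-requests=2000
-ingester.max-concurrent-streams=1000
```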

bboreham avatar Aug 31 '21 10:08 bboreham

It's very likely that the context for this issue is that a 20s timeout was used instead of the default 2s: https://github.com/cortexproject/cortex-jsonnet/blob/3ff1d4cfcbfa28de1b83c33d42d74749e4c9c97b/cortex/distributor.libsonnet#L16

I experienced the same issue for years using 20s as the remote timeout too; the problem went away when the timeout was reduced back to 2s.

> It sounds like a smaller timeout would help.

Bryan was right
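For completeness, the setting in question is the distributor's remote timeout towards ingesters (the key set in the linked distributor.libsonnet); a sketch of keeping it at the default, with the flag form assumed from that jsonnet key:

```
# Keep the distributor -> ingester push timeout at the 2s default rather
# than 20s, so requests towards a slow ingester are bounded.
-distributor.remote-timeout=2s
```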

friedrichg avatar Mar 10 '23 02:03 friedrichg

Fixed in https://github.com/cortexproject/cortex-jsonnet/pull/15

friedrichg avatar Mar 10 '23 02:03 friedrichg