
Rate limiting for S3 compatible block storage

Open jakubgs opened this issue 1 year ago • 10 comments

Describe the bug We are using DigitalOcean Spaces, an S3-compatible storage solution, for storing metrics. This service limits the number of GET requests one can make to 800 per second. In situations where the cache is full we have seen errors like these:

ts=2024-03-18T21:42:10.696053983Z caller=bucket_client.go:135 level=error
msg="bucket operation fail after retries" err="503 Slow Down"
operation="GetRange fake/01HRSSQ403WA1RD7WX20X7E9KX/index (off: 113583688, length: 6568)"

This means the limit of 800 requests per second has been reached.

Expected behavior According to DigitalOcean support, the correct behavior would be something like this:

you can pause for 0.5s-1s after sending 200-300 requests which will surely help with this particular limit

The question is, would it make more sense to rate-limit requests being made to block storage rather than hit the limit and have to back off from making requests for longer? Or is hitting the backoff the correct and simpler way to handle this?
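For illustration, this is roughly what I mean by rate-limiting on the client side. The sketch below is hypothetical (as far as I know Cortex does not expose such a hook today): it wraps whatever http.RoundTripper an S3 client is configured with in a token bucket from golang.org/x/time/rate, so the process never issues more than a fixed number of requests per second. All names and numbers here are made up for the example.

```go
// Hypothetical sketch, not part of Cortex: throttle outgoing S3 requests by
// wrapping the HTTP transport the S3 client uses with a token-bucket limiter.
package s3throttle

import (
	"net/http"

	"golang.org/x/time/rate"
)

// throttledTransport blocks each outgoing request until the token bucket
// allows it, so the client never exceeds the configured request rate.
type throttledTransport struct {
	next    http.RoundTripper
	limiter *rate.Limiter
}

func (t *throttledTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Wait blocks until a token is available or the request context is cancelled.
	if err := t.limiter.Wait(req.Context()); err != nil {
		return nil, err
	}
	return t.next.RoundTrip(req)
}

// newThrottledClient returns an *http.Client that stays under reqPerSec
// (e.g. well below DigitalOcean Spaces' 800 GET/s limit). burst must be >= 1.
func newThrottledClient(reqPerSec float64, burst int) *http.Client {
	return &http.Client{
		Transport: &throttledTransport{
			next:    http.DefaultTransport,
			limiter: rate.NewLimiter(rate.Limit(reqPerSec), burst),
		},
	}
}
```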

jakubgs avatar Mar 20 '24 11:03 jakubgs

Thanks for reporting the issue.

The question is, would it make more sense to rate-limit requests being made to block storage rather than hit the limit and have to back off from making requests for longer? Or is hitting the backoff the correct and simpler way to handle this?

I understand that the current behavior is not ideal. However, this is not an easy problem to solve since Cortex has multiple microservices and multiple replicas sending requests to the object storage at the same time. Thus, it is pretty hard to do rate limiting on the client side, since what you actually need is a global rate limiter across all your Cortex pods.

From the error log provided, did you hit the rate limit from the Store Gateway or other components? For most of the components I believe backoff and retry should be fine since they are not that latency-sensitive.
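To make the difficulty concrete: without a shared limiter, the best a single replica can do is take a static share of the provider's budget, which wastes budget when replicas are idle and breaks whenever the number of replicas changes. A rough, hypothetical sketch (not something Cortex implements):

```go
// Hypothetical sketch only: statically splitting a provider-wide budget
// (e.g. DigitalOcean Spaces' 800 GET/s) across the replicas that share a bucket.
package s3throttle

import "golang.org/x/time/rate"

// perReplicaLimiter gives each replica an equal share of the global budget.
// Every replica must know the replica count, and the split cannot react to
// replicas being idle, added, or removed -- hence the need for a truly
// global limiter to do this properly.
func perReplicaLimiter(globalReqPerSec float64, replicas, burst int) *rate.Limiter {
	if replicas < 1 {
		replicas = 1
	}
	return rate.NewLimiter(rate.Limit(globalReqPerSec/float64(replicas)), burst)
}

// Example: 800 req/s shared by 10 pods -> each pod limited to 80 req/s.
// limiter := perReplicaLimiter(800, 10, 20)
```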

yeya24 avatar Mar 25 '24 10:03 yeya24

@jakubgs if you are using the mixin for cortex, there is a dashboard for object storage that shows which component is making the requests.

Something like this: [screenshot: Cortex object storage dashboard] Along with error and rate dashboards, etc.

If you have it, I would like to see those dashboards to understand which components and which operations are getting errors.

friedrichg avatar Mar 25 '24 21:03 friedrichg

From the error log provided, did you hit the rate limit from the Store Gateway or other components? For most of the components I believe backoff and retry should be fine since they are not that latency-sensitive.

That's correct, the log is from a host running 3 services on one node: querier, compactor, store-gateway.

@jakubgs if you are using the mixin for cortex, there is a dashboard for object storage that shows which component is making the requests.

Sorry, I don't know what "mixin" is in this context.

jakubgs avatar Mar 26 '24 08:03 jakubgs

Sorry, I don't know what "mixin" is in this context.

The Cortex mixin contains dashboards and alerts; you can find the latest release at https://github.com/cortexproject/cortex-jsonnet/releases

friedrichg avatar Mar 26 '24 15:03 friedrichg

Oh, no, I have my own dashboard. What is the metric name?

jakubgs avatar Mar 26 '24 15:03 jakubgs

https://github.com/cortexproject/cortex-jsonnet/blob/main/cortex-mixin/dashboards/object-store.libsonnet

^ look in there

friedrichg avatar Mar 26 '24 16:03 friedrichg

There's not much happening honestly:

[dashboard screenshot]

And yet I see the errors in the query node logs (querier, compactor, store-gateway):

$ j --since '1 day ago' -ocat -u cortex --grep '503 Slow Down' | wc -l
859

jakubgs avatar Mar 27 '24 08:03 jakubgs

I can see an interesting spike last month:

[dashboard screenshot showing the spike]

But that's not really relevant since I still see errors today.

jakubgs avatar Mar 27 '24 08:03 jakubgs

Actually, there are 859 errors in the last hour:

$ j -ocat -u cortex --since '1 hour ago' --grep '503 Slow Down' | wc -l
859

But the graph shows a low number of requests:

[dashboard screenshot]

Seems wrong.

jakubgs avatar Mar 27 '24 08:03 jakubgs

Thanks for sharing. It looks like you don't have that many requests, to be honest. The concerning ones are the querier and store-gateway errors.

To reduce queries to block storage, make sure you have:

  • bucket index enabled
  • Enough caching configured. Cortex can use 4 types of caches; you want all 4 enabled (see the config sketch below).
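For reference, here is a rough sketch of where those settings live in the Cortex YAML configuration. The memcached addresses are placeholders, and the exact option names should be double-checked against the docs for your Cortex version:

```yaml
# Illustrative sketch only -- verify option names against your Cortex version's docs.
blocks_storage:
  bucket_store:
    bucket_index:
      enabled: true   # queriers/store-gateways read a bucket index instead of scanning the bucket
    index_cache:
      backend: memcached
      memcached:
        addresses: dns+memcached-index.example.svc:11211      # placeholder
    chunks_cache:
      backend: memcached
      memcached:
        addresses: dns+memcached-chunks.example.svc:11211     # placeholder
    metadata_cache:
      backend: memcached
      memcached:
        addresses: dns+memcached-metadata.example.svc:11211   # placeholder
# The fourth cache (query results) is configured on the query frontend under
# query_range; see the Cortex documentation for the exact keys.
```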

friedrichg avatar Mar 27 '24 16:03 friedrichg

This issue has been automatically marked as stale because it has not had any activity in the past 60 days. It will be closed in 15 days if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Apr 26 '25 18:04 stale[bot]

I think the answer about utilizing as many caches as possible is a good one.

jakubgs avatar Apr 27 '25 05:04 jakubgs