
Rate limit on object store

Open MichaelScofield opened this issue 1 year ago • 4 comments

What problem does the new feature solve?

Remote object store vendors (like S3) all tend to enforce request rate limits. We should adapt to that to avoid foreseeable errors and gain more predictable throughput.

What does the feature do?

Introduce rate limiting in our object store layer (in OpenDAL, or perhaps on top of it).
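As a rough illustration of the idea (not GreptimeDB or OpenDAL code; all names here are hypothetical), such a layer typically boils down to a token bucket consulted before each request:

```rust
use std::time::Instant;

/// Minimal token bucket: holds at most `capacity` tokens, refilled at
/// `refill_per_sec` tokens per second. A request consumes tokens; when
/// the bucket is empty the request must be rejected or delayed, which
/// is where backpressure on callers would kick in.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self {
            capacity,
            tokens: capacity,
            refill_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Top up tokens according to the elapsed time, capped at capacity.
    fn refill(&mut self) {
        let elapsed = self.last_refill.elapsed().as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = Instant::now();
    }

    /// Try to consume `n` tokens; `false` means the caller should wait
    /// or fail with a rate-limited error.
    fn try_acquire(&mut self, n: f64) -> bool {
        self.refill();
        if self.tokens >= n {
            self.tokens -= n;
            true
        } else {
            false
        }
    }
}
```

A real layer would wrap every read/write call with such an `try_acquire`, but the bookkeeping above is the core of it.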

Implementation challenges

What should we do if the object store's request limit is exceeded? I could imagine applying backpressure to clients' requests. Our data flushing strategy might also have to change accordingly.

MichaelScofield avatar Jul 24 '23 08:07 MichaelScofield

@Xuanwo I guess OpenDAL provides this function out of the box? It seems like it should be in OpenDAL's scope.

@MichaelScofield does all our access to the object store go through OpenDAL? I wonder if we have other code paths that access the object store without OpenDAL.

tisonkun avatar Mar 26 '24 00:03 tisonkun

Yes, to the best of my knowledge.

MichaelScofield avatar Mar 26 '24 01:03 MichaelScofield

@MichaelScofield OpenDAL has a ThrottleLayer we can leverage. But here are some issues:

  1. It supports throttling writes, but not reads. Is that sufficient for our case?
  2. How should we handle rate limit errors? OpenDAL has an `ErrorKind::RateLimited` to identify this kind of error.
  3. How should we expose the config option, or should it be configured internally? The object store is deep in the call stack.

tisonkun avatar Mar 27 '24 08:03 tisonkun

A general solution for handling rate limit errors is letting clients back off and wait.
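A minimal sketch of that backoff idea, assuming a generic fallible operation and a caller-supplied predicate for rate-limit errors (all names hypothetical):

```rust
use std::time::Duration;

/// Capped exponential backoff: base * 2^attempt, clamped to `max_ms`.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(exp.min(max_ms))
}

/// Retry `op` while it fails with a rate-limit error, sleeping with
/// exponential backoff between attempts. Non-rate-limit errors and
/// exhausted attempts are returned to the caller as-is.
fn retry_on_rate_limit<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    is_rate_limited: impl Fn(&E) -> bool,
    max_attempts: u32,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if is_rate_limited(&e) && attempt + 1 < max_attempts => {
                std::thread::sleep(backoff_delay(attempt, 10, 1_000));
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}
```

In an async code base the sleep would be a non-blocking timer instead, and the delay usually gets some jitter added to avoid synchronized retry storms.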

tisonkun avatar Mar 27 '24 08:03 tisonkun