Rate limit on object store
What problem does the new feature solve?
Remote object store vendors (like S3) all tend to enforce request rate limits. We should adapt to that to avoid foreseeable errors and gain more predictable throughput.
What does the feature do?
Introduce rate limiting at our object store layer (in OpenDAL, or perhaps on top of it).
Implementation challenges
What should we do if the object store's request limit is exceeded? I could imagine applying backpressure to clients' requests. Our data flushing strategy might also have to change accordingly.
@Xuanwo I guess OpenDAL provides this function out of the box? It seems it should be in OpenDAL's scope.
@MichaelScofield does all our access to the object store go through OpenDAL? I wonder if we have other code paths that access the object store without OpenDAL.
Yes, to the best of my knowledge.
@MichaelScofield OpenDAL has a `ThrottleLayer` we can leverage. But here are some issues:

- It supports throttling writes, not reads. Is that sufficient for our case?
- How should we handle rate limit errors? OpenDAL has an `ErrorKind::RateLimited` to identify this kind of error.
- How should we expose the config option, or should it be configured internally? The object store is deep in the call stack.
A general solution for handling rate limit errors is to have clients back off and retry.
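The backoff idea could look roughly like the sketch below. The `StoreError` enum and `with_backoff` helper are hypothetical stand-ins (the real code would match on OpenDAL's `ErrorKind::RateLimited`); the delays are arbitrary illustrative values.

```rust
use std::thread;
use std::time::Duration;

// Hypothetical error type standing in for opendal::ErrorKind::RateLimited.
#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum StoreError {
    RateLimited,
    Other,
}

// Retry `op` with exponential backoff whenever the store reports RateLimited.
// Other errors and successes are returned to the caller immediately.
fn with_backoff<T>(
    mut op: impl FnMut() -> Result<T, StoreError>,
    max_retries: u32,
) -> Result<T, StoreError> {
    let mut delay = Duration::from_millis(10);
    for _ in 0..max_retries {
        match op() {
            Err(StoreError::RateLimited) => {
                thread::sleep(delay);
                delay *= 2; // exponential backoff between attempts
            }
            other => return other,
        }
    }
    op() // final attempt after exhausting retries
}

fn main() {
    // Simulated store that rejects the first two calls as rate limited.
    let mut calls = 0;
    let result = with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err(StoreError::RateLimited) } else { Ok(calls) }
        },
        5,
    );
    assert_eq!(result, Ok(3));
    println!("succeeded after {} calls", calls);
}
```

In practice the sleep would propagate upward as backpressure: while a flush task is waiting out a rate limit, upstream writers naturally slow down too.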