
Does `acquire` wait longer than needed?

Open schoennenbeck opened this issue 1 year ago • 4 comments

In the `acquire` method, the future waiting for drips has a timeout of `1 / self._rate_per_sec * amount`, which waits "enough seconds" for `amount` to drip out of the bucket before checking again whether there is now enough room. However, if the bucket is not completely full, isn't this more time than is strictly needed?

Say my bucket has `max_rate = 3`, `_rate_per_sec = 1` and `_level = 2` while I am requesting `amount = 2`. In this case the timeout would be 2 seconds (waiting for the full amount to drip out of the bucket), but there would already be enough room in the bucket after just 1 second (since it wasn't completely full to begin with).

So shouldn't the `acquire` method wait for `1 / self._rate_per_sec * (amount - self.max_rate + self._level)` instead?
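
To make the arithmetic concrete, here is the example above written out as plain numbers (a quick illustrative sketch, not code from aiolimiter itself):

rate_per_sec = 1.0  # drip rate of the bucket (max_rate / time_period)
max_rate = 3.0      # bucket capacity
level = 2.0         # current fill level
amount = 2.0        # capacity being requested

# Current timeout: wait long enough for the full amount to drip out.
current_wait = 1 / rate_per_sec * amount                         # 2.0 s
# Suggested timeout: only wait until the bucket has room for `amount`.
suggested_wait = 1 / rate_per_sec * (amount - max_rate + level)  # 1.0 s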

schoennenbeck avatar Feb 13 '24 14:02 schoennenbeck

Here is some sample code that demonstrates the issue:

import asyncio
import time
from aiolimiter import AsyncLimiter

# max_rate=3 over time_period=3s, i.e. the bucket drains 1 unit per second
limiter = AsyncLimiter(3, 3)


async def small_task():
    await limiter.acquire(1)
    print(f"Small task completed after {time.time() - ref:.2f}s")


async def big_task():
    await limiter.acquire(3)
    print(f"Big task completed after {time.time() - ref:.2f}s")


async def main():
    # asyncio.wait requires Tasks; bare coroutines are rejected on Python 3.11+
    await asyncio.wait([asyncio.create_task(small_task()),
                        asyncio.create_task(big_task())])


ref = time.time()
asyncio.run(main())

With the current implementation, the output is:

Small task completed after 0.00s
Big task completed after 3.00s

If we make the suggested change to the `acquire` method, we get:

Small task completed after 0.00s
Big task completed after 1.00s

schoennenbeck avatar Feb 14 '24 09:02 schoennenbeck

Additional observation: if we wait for `1 / self._rate_per_sec * (amount - self.max_rate + self._level)`, I don't think there is any real need for the `_waiters`, since there cannot be enough capacity for the task any earlier than that.

However, one can still use the waiters to build a first-come-first-served variant of the limiter, which ensures that bigger tasks actually get a chance to run even when the limiter is at or near capacity. I have a prototype implementation of this and would be happy to open a PR if there is interest.
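
To illustrate the idea (this is not that prototype, just a minimal sketch, and `FifoLimiter` is a hypothetical name): one simple way to get first-come-first-served ordering is to serialize acquisitions through an `asyncio.Lock`, which wakes its waiters in FIFO order, in front of the existing limiter.

import asyncio

from aiolimiter import AsyncLimiter


class FifoLimiter:
    """Sketch of a first-come-first-served wrapper around AsyncLimiter."""

    def __init__(self, max_rate: float, time_period: float = 60) -> None:
        self._limiter = AsyncLimiter(max_rate, time_period)
        self._lock = asyncio.Lock()  # asyncio.Lock wakes waiters in FIFO order

    async def acquire(self, amount: float = 1) -> None:
        # Only the head of the queue waits on the underlying limiter, so a
        # big request that is waiting for capacity cannot be overtaken by
        # later, smaller requests.
        async with self._lock:
            await self._limiter.acquire(amount)

With such a wrapper, a big acquire(3) that arrives first completes before later acquire(1) calls instead of being starved by them.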

schoennenbeck avatar Feb 16 '24 06:02 schoennenbeck

Indeed, you are completely right: that calculation has been wrong for a long time!

I am always interested in improvements, and a PR that would allow for first-come-first-serve style acquisition is something I'd love to see.

mjpieters avatar May 22 '24 16:05 mjpieters

@schoennenbeck did you open the PR?

mk-armah avatar Aug 27 '24 15:08 mk-armah