limiter.ratelimit(x, delay=True) in 3.x

Open nonamethanks opened this issue 1 year ago • 7 comments

In 2.x I used to use limiter.ratelimit(key, delay=True) to wait until a request could be performed for the key provided. How do I achieve this in 3.x?

I tried the following, but it does not work: it just ignores any rate limit.

from pyrate_limiter import Limiter, Rate, Duration

limiter = Limiter(
    Rate(3, Duration.SECOND),
    raise_when_fail=False,
)

for i in range(100):
    limiter.try_acquire("test")
    print(i)

nonamethanks avatar Jan 30 '24 16:01 nonamethanks

You can use the "max_delay" argument
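For instance, a minimal sketch of that approach (max_delay takes milliseconds or a Duration in 3.x; the values below are illustrative):

from pyrate_limiter import Duration, Limiter, Rate

# Allow each try_acquire() call to block for up to one second waiting for a slot,
# which roughly mirrors ratelimit(key, delay=True) from 2.x.
limiter = Limiter(
    Rate(3, Duration.SECOND),
    raise_when_fail=False,
    max_delay=Duration.SECOND,
)

for i in range(100):
    limiter.try_acquire("test")
    print(i)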

vutran1710 avatar Jan 31 '24 02:01 vutran1710

That still doesn't allow me to have independent rate limits for different keys, the way ratelimit used to.

nonamethanks avatar Jan 31 '24 10:01 nonamethanks

If you want a different delay value for each individual key, then you probably need to use multiple Limiters. If you just want different rate limits for different keys, then you can utilize the BucketFactory class and its get() method
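As a rough sketch of the multiple-Limiters approach, assuming you simply keep one Limiter per key (the keys, rates, and delays below are made up for illustration):

from pyrate_limiter import Duration, Limiter, Rate

# One Limiter per key, each with its own rate and its own maximum wait time.
limiters = {
    "fast-api": Limiter(Rate(10, Duration.SECOND), raise_when_fail=False, max_delay=500),
    "slow-api": Limiter(Rate(1, Duration.SECOND), raise_when_fail=False, max_delay=Duration.SECOND * 2),
}

limiters["fast-api"].try_acquire("fast-api")
limiters["slow-api"].try_acquire("slow-api")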

vutran1710 avatar Jan 31 '24 10:01 vutran1710

Hi, great work.

I think this is a transition question from 2.x to 3.x, and it is hard for me as well. In the past, different keys could easily be rate-limited separately with a single Limiter, even if the rates were shared. Now it seems this requires an instance of BucketFactory, but there is no example other than the single-bucket one, which defeats the purpose of the factory.

I'll edit the comment with an example when I sort it out myself. I'm looking to make a Redis-based bucket factory.

petroslamb avatar Feb 08 '24 23:02 petroslamb

from pyrate_limiter import AbstractBucket, BucketFactory, Duration, InMemoryBucket, Rate, RateItem, TimeClock

# For this sketch, a single clock and a single in-memory bucket shared by every item.
clock = TimeClock()
bucket = InMemoryBucket([Rate(3, Duration.SECOND)])

class MyBucketFactory(BucketFactory):
    # You can define a constructor here, but one is not required
    # for the bucket factory to work.

    def wrap_item(self, name: str, weight: int = 1) -> RateItem:
        """Time-stamp the item and return a RateItem"""
        now = clock.now()
        return RateItem(name, now, weight=weight)

    def get(self, _item: RateItem) -> AbstractBucket:
        """For simplicity's sake, all items route to the same, single bucket"""
        return bucket

You can have multiple buckets by modifying the get() method
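For example, a minimal per-key variant of get(), using in-memory buckets (the class and key names here are just for illustration):

from pyrate_limiter import BucketFactory, Duration, InMemoryBucket, Limiter, Rate, RateItem, TimeClock

class PerKeyBucketFactory(BucketFactory):
    def __init__(self, rates):
        self.clock = TimeClock()
        self.rates = rates
        self.buckets = {}

    def wrap_item(self, name: str, weight: int = 1) -> RateItem:
        return RateItem(name, self.clock.now(), weight=weight)

    def get(self, item: RateItem) -> InMemoryBucket:
        # Lazily create a separate bucket for each distinct key.
        if item.name not in self.buckets:
            bucket = InMemoryBucket(self.rates)
            self.schedule_leak(bucket, self.clock)
            self.buckets[item.name] = bucket
        return self.buckets[item.name]

limiter = Limiter(PerKeyBucketFactory([Rate(3, Duration.SECOND)]), raise_when_fail=False)
limiter.try_acquire("key-a")
limiter.try_acquire("key-b")  # rate-limited independently of "key-a"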

vutran1710 avatar Feb 09 '24 03:02 vutran1710

Hi, thanks for the answer.

I was thinking something along these lines:

import time
from multiprocessing.pool import ThreadPool

from redis import Redis
from pyrate_limiter import AbstractClock, BucketFactory, Duration, Limiter, Rate, RateItem, RedisBucket, TimeClock

subsecond_rate = Rate(1, Duration.SECOND * 0.5)  # sub-second intervals, a new feature, are why I upgraded the library.

rates = [subsecond_rate]

class RedisBucketFactory(BucketFactory):

    def __init__(self, rates: list[Rate], redis_connection: Redis, clock: AbstractClock = TimeClock(), buckets: dict[str, RedisBucket] = None, thread_pool: ThreadPool = None):
        self.rates = rates
        self.redis = redis_connection
        self.clock = clock
        self.buckets = buckets or {}
        self.thread_pool = thread_pool
    
    def wrap_item(self, name: str, weight: int = 1) -> RateItem:
        return RateItem(name, self.clock.now(), weight=weight)
    
    def get(self, item: RateItem) -> RedisBucket:
        if item.name not in self.buckets:
            bucket = RedisBucket.init(self.rates, self.redis, item.name)
            self.schedule_leak(bucket, self.clock)
            self.buckets[item.name] = bucket
        return self.buckets[item.name]
    

redis_connection = Redis()  # assumes a Redis server running locally with default settings
factory = RedisBucketFactory(rates, redis_connection)
limiter = Limiter(factory, raise_when_fail=False)

# Limiter is now ready to work!
start_time = time.time()

while not limiter.try_acquire("hello world"):
    pass
print(time.time() - start_time)

while not limiter.try_acquire("hello underworld"):   # Notice that the different key, creates a different bucket like in v2.
    pass
print(time.time() - start_time)

while not limiter.try_acquire("hello world"):
    pass
print(time.time() - start_time)

Output:

8.511543273925781e-05
0.0001881122589111328
0.5002071857452393

I hope I called schedule_leak properly. Also, is there an issue if too many buckets are created dynamically?

petroslamb avatar Feb 11 '24 23:02 petroslamb

Yeah, you got it right. And to my knowledge, whether you are using an in-memory or a Redis-based backend, it is fine to create as many buckets as you want, as long as the memory can hold them. Of course, you should consider your hardware resources, and it is always better to estimate how much memory you need beforehand.

vutran1710 avatar Feb 29 '24 12:02 vutran1710