
Question about rate limiting

Open matthiasfeist opened this issue 6 years ago • 12 comments

Hi!

I have Bull running a queue on 2 machines against the same Redis cluster, which is working fine. Both machines take jobs from the queue and process them correctly. Since these jobs call some rate-limited APIs, I'd like to configure the rate limit in Bull.

I'm not really certain, however, whether the rate limit is a global setting or applies to only one machine. So in my case, if the external API allows at most 60 requests per minute and I have 2 machines processing jobs, do I set the rate limit to 60/m (because it's global per Redis server) or to 30/m (because I have 2 machines)? This is not entirely clear to me from the docs.

Thanks for the help.

matthiasfeist avatar Dec 06 '18 22:12 matthiasfeist

The limit is global, independent of the number of workers you have.


manast avatar Dec 06 '18 22:12 manast
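To make the answer above concrete, here is a minimal sketch of Bull's `limiter` queue option. Because the limit is enforced globally through Redis, you configure the API's real limit (60 per minute) once, no matter how many machines run workers. The queue name and Redis URL in the usage comment are illustrative, not from the thread.

```javascript
// Global rate limit: at most 60 jobs processed per 60,000 ms,
// shared across ALL workers connected to the same Redis.
const queueOptions = {
  limiter: {
    max: 60,             // use the API's real limit, NOT 60 / numberOfMachines
    duration: 60 * 1000, // window length in milliseconds
  },
};

// Usage sketch (requires a running Redis):
// const Queue = require('bull');
// const apiQueue = new Queue('api-calls', 'redis://127.0.0.1:6379', queueOptions);
```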

Cool. So that means the limit is enforced through the job state in Redis, not by something held in memory in the Node code. Would that be something that should be added to the docs?

matthiasfeist avatar Dec 11 '18 12:12 matthiasfeist

I'd like to chime in here, as I was wondering roughly the same thing. I'm currently querying an API that has a rate limit of 400 requests per IP, and I have multiple IPs. Is there any way to apply the rate limit on a per-worker basis?

Cyberuben avatar Jan 10 '19 23:01 Cyberuben

Same here, would be interesting if this is possible with bull.

sambP avatar Dec 31 '21 18:12 sambP

@sambP depending on your needs, maybe the "groupKey" option does what you need: https://github.com/OptimalBits/bull/blob/develop/REFERENCE.md#queue However, we have a much better solution in the Pro version of BullMQ: https://docs.bullmq.io/bullmq-pro/groups/rate-limiting

manast avatar Jan 10 '22 06:01 manast
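A hedged sketch of the `groupKey` option manast mentions: jobs whose data carry the same value under the configured key share one rate-limit bucket. The field name `sourceIp` and the numbers are illustrative, not from the thread.

```javascript
// Rate limit applied per group rather than per queue: jobs with the same
// job.data.sourceIp count against the same 400-per-minute budget.
const queueOptions = {
  limiter: {
    max: 400,
    duration: 60 * 1000,
    groupKey: 'sourceIp', // illustrative field name looked up in job.data
  },
};

// Usage sketch (requires a running Redis):
// const Queue = require('bull');
// const queue = new Queue('fetch', queueOptions);
// queue.add({ sourceIp: '203.0.113.7', url: 'https://example.com/resource' });
```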

@manast thanks for the tip. I didn't know there is a Pro version. Not related to the topic here, but there is no way to apply different rate limits per group, or is there?

I don't think that groups can help me here. We have to deal with an IP-rate-limited third-party API. Our current solution is to run a number of small EC2 instances with public IPs assigned. Each instance acts as a worker for a separate queue with the IP limit applied, and a custom middleware distributes new jobs evenly across all queues/instances/IPs based on the waiting count. I don't think there is a better native way of doing it?

sambP avatar Jan 13 '22 12:01 sambP
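sambP's distribution middleware isn't shown in the thread; purely as an illustration, the "distribute based on the waiting amount" step could look like this least-loaded selection (all names hypothetical; in real Bull code the counts would come from `queue.getWaitingCount()`):

```javascript
// Pick the queue with the fewest waiting jobs.
// queues: array of queue handles; waitingCounts: matching array of waiting-job counts.
// Ties go to the earlier queue in the array.
function pickLeastLoadedQueue(queues, waitingCounts) {
  let best = 0;
  for (let i = 1; i < queues.length; i++) {
    if (waitingCounts[i] < waitingCounts[best]) best = i;
  }
  return queues[best];
}
```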

> @manast thanks for the tip. I didn't know there is a Pro version. Not related to the topic here, but there is no way to apply different rate limits per group, or is there?

Currently, you can only use the same rate limit for all the groups, but it would not be impossible to implement a different rate per group in the future.

> I don't think that groups can help me here. We have to deal with an IP-rate-limited third-party API. Our current solution is to run a number of small EC2 instances with public IPs assigned. Each instance acts as a worker for a separate queue with the IP limit applied, and a custom middleware distributes new jobs evenly across all queues/instances/IPs based on the waiting count. I don't think there is a better native way of doing it?

I see, this is an interesting use case I hadn't thought about. So basically what you would need in this case is a rate limit at the worker level (assuming there is only one worker per IP address). This should actually be easier to implement than our current distributed rate limiter...

manast avatar Jan 17 '22 05:01 manast

Yeah absolutely. A rate limit per worker would be extremely helpful. We could completely remove our custom logic.

sambP avatar Jan 17 '22 10:01 sambP
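Bull has no worker-level limiter, so the feature discussed here would have to live in the worker process itself. As a minimal in-memory sketch only (not a Bull API; a fixed-window variant with illustrative names), a per-worker limit could look like this:

```javascript
// Fixed-window limiter local to one process: allows `max` calls per `windowMs`.
// Because the state lives in memory, each worker (and hence each IP) gets its
// own independent budget, unlike Bull's Redis-backed global limiter.
class WorkerRateLimiter {
  constructor(max, windowMs, now = Date.now()) {
    this.max = max;
    this.windowMs = windowMs;
    this.remaining = max;
    this.windowStart = now;
  }

  // Returns true if the call may proceed now, false if the budget is spent.
  tryTake(now = Date.now()) {
    if (now - this.windowStart >= this.windowMs) {
      this.remaining = this.max; // new window: refill the budget
      this.windowStart = now;
    }
    if (this.remaining > 0) {
      this.remaining -= 1;
      return true;
    }
    return false;
  }
}
```

A worker's processor could call `tryTake()` before hitting the rate-limited API, and delay or re-queue the job when it returns false.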

> So basically what you would need in this case is a rate limit at the worker level (assuming there is only one worker per IP address).

@manast do you think this feature will be added to the roadmap in the future? Is there any way of prioritising it?

sambP avatar Jul 17 '22 10:07 sambP

We do not have plans to add new features to Bull; we are only maintaining it and fixing bugs. For BullMQ the chances would be higher, but for now you are the only user requesting this feature, so it is difficult for us to give it high priority right now.

manast avatar Jul 17 '22 13:07 manast

I would also love this feature. I'm not sure how else to handle a lot of CPU-intensive work without it all dog-piling onto one worker.

MattInternet avatar Aug 19 '22 19:08 MattInternet

@MattInternet which one of the features discussed in this thread?

manast avatar Aug 21 '22 20:08 manast