distilabel
[FEATURE] Add delay parameter to GeneratorTask
Is your feature request related to a problem? Please describe. When running a TextGeneration task on a big dataset using the OpenAI API, I'm getting the following error:
```
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o-mini in organization org-X on requests per min (RPM): Limit 500, Used 500, Requested 1. Please try again in 120ms. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'requests', 'param': None, 'code': 'rate_limit_exceeded'}}
```
Describe the solution you'd like
I'd like a `batch_delay` or `generation_batch_delay` parameter that separates consecutive batches by a specified delay (in ms), so the task stays under the provider's requests-per-minute limit.
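As a minimal sketch of the requested behavior (the names `generate_with_delay`, `generate_fn`, and `batch_delay_ms` are illustrative, not part of distilabel's API): the task would simply sleep for the configured delay between batches before issuing the next round of requests.

```python
import time


def generate_with_delay(batches, generate_fn, batch_delay_ms=0):
    """Process batches sequentially, sleeping `batch_delay_ms` between them.

    Hypothetical sketch: `generate_fn` stands in for whatever call the task
    makes to the LLM provider for one batch of inputs.
    """
    results = []
    for i, batch in enumerate(batches):
        if i > 0 and batch_delay_ms > 0:
            # Space out batches to stay under the provider's RPM limit.
            time.sleep(batch_delay_ms / 1000.0)
        results.extend(generate_fn(batch))
    return results


# Toy usage: a "generation" that just uppercases its inputs.
out = generate_with_delay(
    [["a", "b"], ["c"]],
    lambda batch: [s.upper() for s in batch],
    batch_delay_ms=10,
)
print(out)  # ['A', 'B', 'C']
```

With 500 RPM and one request per row, a delay of roughly 120 ms per request (as the error message suggests) would keep the pipeline under the limit.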
Describe alternatives you've considered Another option would be to implement load-balancer logic across multiple API keys, which would avoid the rate-limit error while keeping generation speed.
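The multi-key alternative could be as simple as rotating requests round-robin over a pool of keys (a sketch; `KeyBalancer` is a hypothetical helper, not an existing distilabel class):

```python
import itertools


class KeyBalancer:
    """Rotate requests round-robin over a pool of API keys.

    Illustrative only: a real implementation would also track per-key
    rate-limit state and skip keys that recently returned a 429.
    """

    def __init__(self, keys):
        self._cycle = itertools.cycle(keys)

    def next_key(self):
        return next(self._cycle)


balancer = KeyBalancer(["sk-key-1", "sk-key-2"])
print([balancer.next_key() for _ in range(4)])
# ['sk-key-1', 'sk-key-2', 'sk-key-1', 'sk-key-2']
```

With N keys each allowing 500 RPM, this effectively multiplies the throughput ceiling by N, at the cost of managing key state.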
Additional context None