llama_index
Rate limit error
INFO:openai:error_code=None error_message='Rate limit reached for default-global-with-image-limits in organization xxxxxx on requests per min. Limit: 60 / min. Current: 70 / min. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.' error_param=None error_type=requests message='OpenAI API error received' stream_error=False
Is there any solution for this?
Is this with the sync API or async API?
For sync queries, we have retry_on_throttling enabled by default for LLMPredictor, so it should retry up to 10 times (by default).
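If you want to be explicit about it, here is a rough sketch, assuming an older llama_index release where the LLMPredictor constructor accepts a retry_on_throttling flag and where the predictor is passed directly to the index constructor (the data path, model name, and query text are just placeholders):

```python
# Sketch: explicitly enabling retry_on_throttling for sync queries.
from langchain.llms import OpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()

# retry_on_throttling tells the predictor to retry sync completion calls
# when the OpenAI API returns a rate-limit error.
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="text-davinci-003"),
    retry_on_throttling=True,
)

index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor)
response = index.query("What does the document say about rate limits?")
print(response)
```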
For async queries, we haven't enabled retry_on_throttling yet, but it's a quick change.
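In the meantime, a client-side workaround is to wrap the async call in your own retry with exponential backoff. A minimal sketch using the third-party tenacity library, assuming the pre-1.0 openai package (which exposes openai.error.RateLimitError) and that your index exposes an aquery coroutine (adjust to whichever async entry point you are actually calling):

```python
# Workaround sketch: retry async queries on rate-limit errors with backoff.
import asyncio

import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


@retry(
    retry=retry_if_exception_type(openai.error.RateLimitError),
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(10),
)
async def aquery_with_retry(index, query_text: str):
    # tenacity re-runs this coroutine on RateLimitError, waiting longer between attempts.
    return await index.aquery(query_text)


# asyncio.run(aquery_with_retry(index, "summarize the document"))
```

Since the error above shows 70 requests/min against a 60 requests/min limit, backing off like this (or simply spacing out requests) should be enough to stay under the cap.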
Hey @Minweiwangaaaa, going to close this issue now.
If you have further issues, please join our discord community (https://discord.com/invite/dGcwcsnxhU), you'd get much better support there.