shell_gpt
HTTPError: 429 Client Error: Too Many Requests for url: https://api.openai.com/v1/completions
After adding my API secret and making one request, I get this error message. I tried the following command:
sgpt "nginx default config file location"
Usually this happens because of OpenAI API limitations. You can read more about request limits here, and you can track your usage in the dashboard. Note that the limitation applies to both OpenAI's playground and this tool, which uses their API. It can also happen during high-traffic periods on OpenAI's side; more details here.
From OpenAI documentation:
| CODE | OVERVIEW |
|---|---|
| 429 - You exceeded your current quota, please check your plan and billing details | Cause: You have hit your maximum monthly spend (hard limit), which you can view in the account billing section. Solution: Apply for a quota increase. |
| 429 - Rate limit reached for requests | Cause: You are sending requests too quickly. Solution: Pace your requests. Read the Rate limit guide. |
| 429 - The engine is currently overloaded, please try again later | Cause: Our servers are experiencing high traffic. Solution: Please retry your requests after a brief wait. |
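For the last two cases (rate limit reached, engine overloaded), pacing and retrying on the client side can help. A rough sketch of one way to wrap a call with exponential backoff (a generic helper, not something shell_gpt provides):

```python
import time
import requests

def post_with_backoff(url, max_retries=5, **kwargs):
    """Retry a POST on HTTP 429, sleeping between attempts."""
    for attempt in range(max_retries):
        resp = requests.post(url, **kwargs)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Honor Retry-After if the server sends it, otherwise back off exponentially.
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    resp.raise_for_status()  # give up: surface the final 429 as an HTTPError
    return resp
```

Note that retrying cannot work around the first case, an exhausted quota; that one is only fixed on the billing side.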
Duplicate https://github.com/TheR1D/shell_gpt/issues/13
I can confirm that 429 - Rate limit reached for requests is not always due to a local rate limit being hit: I opened my machine this morning, ran the command, and got the status code immediately. I would hazard a guess, however, that this is still predominantly an issue on OpenAI's side due to traffic.
Thanks! I think it's the limit; I'll have to switch to a paid API account.
I'm reopening the issue to move it to the Open Issues section, so that others can see it before creating duplicates.
Do you think changing the token request amount could help? I ran into the same problem. I reset my API key and set my --maxtokens to 1024 just to be safe.
You can try a different model, for example --model curie. You don't have to specify max tokens; OpenAI only counts the tokens in the final output plus your prompt.
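As a side note on token accounting, you can check locally how many tokens a prompt consumes, since billing covers prompt tokens plus completion tokens. A small sketch using the tiktoken package (independent of shell_gpt; the p50k_base encoding is an assumption matching davinci-style completion models, and other models may use a different encoding):

```python
import tiktoken

# Encoding used by davinci-family completion models; adjust for other models.
enc = tiktoken.get_encoding("p50k_base")
prompt = "nginx default config file location"
print(len(enc.encode(prompt)))  # number of prompt tokens billed before the completion
```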