
HTTPError: 429 Client Error: Too Many Requests for url: https://api.openai.com/v1/completions

Open · designium opened this issue 2 years ago • 4 comments

After adding my API secret and making one request, I get this error message. I tried the following command:

sgpt "nginx default config file location"

designium · Feb 14 '23 02:02

Usually this happens because of OpenAI API limitations. You can read more about request limits here, and you can track your usage in the dashboard. Note that the limits apply to both OpenAI's playground and this tool, which uses their API. It can also happen during periods of high traffic on OpenAI's side; more details here.

From OpenAI documentation:

429 - You exceeded your current quota, please check your plan and billing details
Cause: You have hit your maximum monthly spend (hard limit), which you can view in the account billing section.
Solution: Apply for a quota increase.

429 - Rate limit reached for requests
Cause: You are sending requests too quickly.
Solution: Pace your requests. Read the Rate limit guide.

429 - The engine is currently overloaded, please try again later
Cause: Our servers are experiencing high traffic.
Solution: Please retry your requests after a brief wait.
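
If you are hitting the rate-limit or overloaded variants, retrying with exponential backoff usually gets through eventually. A minimal sketch using plain requests against the same completions endpoint (the API key, model name, and retry schedule are placeholders for illustration, not what shell_gpt does internally):

```python
import time
import requests

API_KEY = "sk-..."  # placeholder: your OpenAI API key

def complete_with_backoff(prompt, retries=5):
    """POST to the completions endpoint, backing off on 429 responses."""
    delay = 1.0
    for _ in range(retries):
        response = requests.post(
            "https://api.openai.com/v1/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 256},
            timeout=60,
        )
        if response.status_code != 429:
            response.raise_for_status()  # surface any other HTTP error
            return response.json()
        time.sleep(delay)   # rate limit / overloaded: wait before retrying
        delay *= 2          # exponential backoff
    raise RuntimeError("Still getting 429 after retries; check your quota and billing.")
```

Note that backoff only helps with the rate-limit and overloaded cases; if you have exhausted your monthly quota, no amount of retrying will fix it.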

Duplicate https://github.com/TheR1D/shell_gpt/issues/13

TheR1D · Feb 14 '23 06:02

I can confirm that 429 - Rate limit reached for requests is not always due to hitting a local rate limit: I opened my machine this morning, ran the command once, and got the status code immediately. That said, I'd hazard a guess that this is still predominantly an issue on OpenAI's side due to traffic.

Bide-UK · Feb 14 '23 14:02

Thanks! I think it's the limit; I'll have to switch to a paid API account.

designium · Feb 14 '23 18:02

I'm reopening the issue to move it to the Open Issues section, so that others can see it before creating duplicates.

TheR1D · Feb 15 '23 22:02

Do you think changing the requested token amount could help? I ran into the same problem. I reset my API key and set --maxtokens to 1024 just to be safe.

kid-gorgeous · Feb 21 '23 20:02

You can try a different model, for example --model curie. You don't have to specify max tokens; OpenAI only counts the tokens in the final output plus your prompt.
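
To illustrate the token counting: the completions API reports what it actually billed in the usage field of each response, i.e. prompt tokens plus completion tokens. A rough sketch with plain requests (the model name, prompt, and API key are placeholders; this is not how sgpt builds its requests):

```python
import requests

API_KEY = "sk-..."  # placeholder: your OpenAI API key

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "curie", "prompt": "nginx default config file location"},
    timeout=60,
)
usage = response.json().get("usage", {})
# Billing counts prompt_tokens + completion_tokens, not any max_tokens cap.
print(usage.get("prompt_tokens"), usage.get("completion_tokens"), usage.get("total_tokens"))
```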

TheR1D · Feb 21 '23 20:02