gpt-engineer
Rate Limits
I am running into OpenAI rate limits. Will you add a way for us to manually set rate limits (RPM and TPM) based on our plans?
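In the meantime, a minimal client-side throttle can be sketched along these lines. The `RateLimiter` class below is hypothetical and not part of gpt-engineer; the RPM value is illustrative and would need to match your plan:

```python
import threading
import time


class RateLimiter:
    """Illustrative client-side requests-per-minute limiter (not part of gpt-engineer)."""

    def __init__(self, rpm: int):
        self.min_interval = 60.0 / rpm  # seconds to wait between requests
        self.lock = threading.Lock()
        self.last_call = 0.0

    def wait(self):
        # Sleep just long enough to stay under the configured RPM.
        with self.lock:
            elapsed = time.monotonic() - self.last_call
            if elapsed < self.min_interval:
                time.sleep(self.min_interval - elapsed)
            self.last_call = time.monotonic()


limiter = RateLimiter(rpm=3)  # e.g. a low free-tier limit; adjust to your plan
# call limiter.wait() immediately before each openai.ChatCompletion.create(...)
```

A TPM (tokens-per-minute) budget could be throttled the same way by tracking tokens consumed per window instead of request count.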
Traceback (most recent call last):
  File "
  File "
  File "/Users/ewimsatt/scripts/gpt-engineer/gpt_engineer/main.py", line 49, in <module>
  File "/Users/ewimsatt/scripts/gpt-engineer/gpt_engineer/main.py", line 45, in chat
    messages = step(ai, dbs)
  File "/Users/ewimsatt/scripts/gpt-engineer/gpt_engineer/steps.py", line 64, in gen_spec
    messages = ai.next(messages, dbs.identity["spec"])
  File "/Users/ewimsatt/scripts/gpt-engineer/gpt_engineer/ai.py", line 34, in next
    response = openai.ChatCompletion.create(
  File "/Users/ewimsatt/anaconda3/envs/gpt-end/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/ewimsatt/anaconda3/envs/gpt-end/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/ewimsatt/anaconda3/envs/gpt-end/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/ewimsatt/anaconda3/envs/gpt-end/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/Users/ewimsatt/anaconda3/envs/gpt-end/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.
This error means you need to increase your hard limit here: https://platform.openai.com/account/billing/limits
It would be better to modify the code with a while-true try/except retry loop than to configure fixed rate limits, since limits vary by plan.
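That retry loop might look something like the sketch below. The helper name `with_retry` is mine, not from gpt-engineer; with the pre-1.0 openai SDK shown in the traceback, the exception to catch would be `openai.error.RateLimitError`. Note that the exact error quoted above ("You exceeded your current quota") is a billing limit, which retrying alone will not clear:

```python
import time


def with_retry(fn, exceptions, max_retries=5, base_delay=1.0):
    """Call fn(); on a listed exception, sleep and retry with exponential backoff."""
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return fn()
        except exceptions:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
            delay *= 2  # back off: 1s, 2s, 4s, ...


# real use against the call at gpt_engineer/ai.py line 34 (pre-1.0 SDK):
# with_retry(lambda: openai.ChatCompletion.create(...), openai.error.RateLimitError)
```

In ai.py this would wrap the `openai.ChatCompletion.create(...)` call shown in the traceback, so a transient 429 pauses the run instead of crashing it.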
I am facing the same issue here.
If you wish to reopen the issue, please do so following the new issue template.
Hey @ewimsatt @Abodivic
try wrapping the openai base with reliableGPT - it'll handle model switching in case anyone gets rate limited by OpenAI (you can customize this as well).
from reliablegpt import reliableGPT
openai.ChatCompletion.create = reliableGPT(openai.ChatCompletion.create, user_email='[email protected]')
Source: https://github.com/BerriAI/reliableGPT
Hello @krrishdholakia, can you help me with reliableGPT?