Description
I started a FastAPI service using the "text-chat-davinci-002-20221122" model, but 30 minutes ago it began failing with the message "something went wrong". When I went back to your code ("main.py"), it reported the following error. Please help me with it, thanks!
User:
hello
Traceback (most recent call last):
  File "/data/lilvxue/repos/ChatGPT/src/revChatGPT/service.py", line 65, in <module>
    main()
  File "/data/lilvxue/repos/ChatGPT/src/revChatGPT/service.py", line 60, in main
    response = chatbot.ask(PROMPT)
  File "/data/lilvxue/repos/ChatGPT/src/revChatGPT/Official.py", line 123, in ask
    completion = self._get_completion(
  File "/data/lilvxue/repos/ChatGPT/src/revChatGPT/Official.py", line 49, in _get_completion
    return openai.Completion.create(
  File "/data/lilvxue/anaconda3/envs/chatgpt/lib/python3.10/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/data/lilvxue/anaconda3/envs/chatgpt/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/data/lilvxue/anaconda3/envs/chatgpt/lib/python3.10/site-packages/openai/api_requestor.py", line 227, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/data/lilvxue/anaconda3/envs/chatgpt/lib/python3.10/site-packages/openai/api_requestor.py", line 620, in _interpret_response
    self._interpret_response_line(
  File "/data/lilvxue/anaconda3/envs/chatgpt/lib/python3.10/site-packages/openai/api_requestor.py", line 680, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 2049 tokens, however you requested 4000 tokens (71 in your prompt; 3929 for the completion). Please reduce your prompt; or completion length.
This is a limitation of the model: the prompt tokens plus the requested completion length exceed the context window, so the request is too large.
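The arithmetic in the error explains the failure: the code requests 4000 tokens total (71 in the prompt plus 3929 for the completion) against a 2049-token context window. A minimal sketch of a workaround is to clamp `max_tokens` so the request always fits; the function name and the fixed 2049 limit below are assumptions taken from this specific error message, not part of the library:

```python
# Sketch of a guard against the InvalidRequestError above.
# MAX_CONTEXT is the limit reported in this traceback for
# text-chat-davinci-002-20221122; the helper name is hypothetical.
MAX_CONTEXT = 2049

def clamp_max_tokens(prompt_tokens: int, requested: int = 4000) -> int:
    """Return a max_tokens value such that prompt + completion fits the window."""
    available = MAX_CONTEXT - prompt_tokens
    if available <= 0:
        raise ValueError("Prompt alone exceeds the model's context window")
    return min(requested, available)

# With the numbers from the traceback (71 prompt tokens, 4000 requested),
# the completion budget is capped at 2049 - 71 = 1978 tokens.
print(clamp_max_tokens(71))  # 1978
```

The clamped value would then be passed as `max_tokens` to `openai.Completion.create(...)` in `_get_completion` instead of a hard-coded 4000.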
V2 has been killed on OpenAI's side.
This issue is stale because it has been open for 30 days with no activity.