gpt-researcher
openai.error.InvalidRequestError: The model: `gpt-4` does not exist
I thought this would work with GPT-3.5, and I also thought OpenAI had made the GPT-4 API available to everyone, but after several hours of tinkering I can't get it to work. I see the webpage, but it keeps throwing one error after another (I fixed them all), and this error is the one I can't get past.
I have the same problem.
Hi! I’m one of the founders of Sweep, a GitHub app that solves issues (like small bugs) by writing pull requests. This looks like a good issue for Sweep https://github.com/sweepai/sweep to try. It might need more details from the maintainers, though. We have onboarding instructions here; I’m also happy to help you onboard directly :)
I have the same error: "openai.error.InvalidRequestError: The model: gpt-4 does not exist". It just stopped working.
The model is configured here
https://github.com/assafelovic/gpt-researcher/blob/master/config/config.py#L25
by reading an environment variable:
self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
If you are having issues with access to gpt-4, you can set the environment variable to a model name that you do have access to, e.g. gpt-3.5-turbo.
For example:
export SMART_LLM_MODEL="gpt-3.5-turbo"
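To confirm what this does, here is a minimal sketch (standard library only, simulating the override in-process rather than via the shell) showing that the `os.getenv` fallback in config.py only applies when the variable is unset:

```python
import os

# Simulate the override that `export SMART_LLM_MODEL=...` provides.
os.environ["SMART_LLM_MODEL"] = "gpt-3.5-turbo"

# Mirrors the line in config/config.py: the second argument
# ("gpt-4") is used only when the variable is not set.
smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
print(smart_llm_model)  # gpt-3.5-turbo
```

So as long as SMART_LLM_MODEL is exported in the shell that launches the app, the code never asks for gpt-4.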
Hi!
By making the following modifications, I was able to resolve the issue:
- In gpt-researcher/config/config.py, replace "gpt-4" on line 25 with "gpt-3.5-turbo-16k"
- In gpt-researcher/config/config.py, replace 8000 on line 27 with 4000
Upon running the modified script, I encountered another error: "json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 58)". If you hit the same error, all you need to do is:
In gpt-researcher/agent/research_agent.py line 92, replace "return json.loads(result)" with "return result". This avoids double-parsing, as result is already a JSON object.
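To see why this error appears, here is a small sketch (standard library only; the payload is a hypothetical stand-in, not the actual model output). `json.loads` parses exactly one top-level JSON document, so anything after the first object is reported as "Extra data":

```python
import json

# Hypothetical response containing two JSON documents back to back;
# the real model output that triggered the error would look similar.
result = '{"agent": "researcher"}\n{"note": "second object"}'

try:
    json.loads(result)
except json.JSONDecodeError as e:
    # json.loads stops after the first top-level value, so the second
    # object is flagged as extra data at line 2, column 1.
    print(e)
```

Returning the result as-is sidesteps the second parse entirely.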
Hope this helps!
@madiha1ahmed I did it your way and it worked. thanks!
It works for me now. Thanks