OpenHands
openai.OpenAIError
I have Ollama running and am trying to use it as the LLM that OpenDevin interacts with. My config.toml is set to:
LLM_API_KEY=""
LLM_MODEL="ollama/llama2"
WORKSPACE_DIR="./workspace"
LLM_BASE_URL="http://localhost:11434"
It continues to return:
raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
INFO: connection closed
This occurs on the latest version of OpenDevin, with every deployment variant I have tried: running everything together or running the frontend and backend separately.
When the backend and frontend start, the frontend complains that it cannot connect to the websocket; I suspect the frontend simply loads faster than the backend. After a refresh the websocket connects, but it then disconnects because of the error above. I have re-run the config, added the base URL, rewritten config.toml completely, and tried overriding the variables in session.py. Nothing seems to work.
Please advise.
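For reference, the call I expect this to boil down to is roughly the following (an untested sketch; I'm assuming LiteLLM is what ends up talking to Ollama here, and that Ollama is serving llama2 on the default port):

# Minimal direct LiteLLM call against a local Ollama server (sketch).
import litellm

response = litellm.completion(
    model="ollama/llama2",              # provider prefix + model name
    messages=[{"role": "user", "content": "Hello"}],
    api_base="http://localhost:11434",  # same value as LLM_BASE_URL above
)
print(response.choices[0].message.content)

If a direct call like that works, the Ollama side should be fine and the problem would be in how the config values reach the client.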
When I set the embedding model explicitly to local, I get the following error:
Oops. Something went wrong: Error condensing thoughts: No healthy deployment available, passed model=gpt-3.5-turbo
It seems to pass gpt-3.5-turbo no matter what is set in the config?
What if you use a dummy key?
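For example, something like this in config.toml (placeholder value; the guess is that Ollama ignores the key, and a non-empty string keeps the OpenAI client from raising the error above):

LLM_API_KEY="dummy-key"
LLM_MODEL="ollama/llama2"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_DIR="./workspace"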
Same here, it was working yesterday. Now I'm getting the following:
Oops. Something went wrong: OpenAIException - 404 page not found ... Oops. Something went wrong: No healthy deployment available, passed model=gpt-4-0125-preview
My config file:
LLM_API_KEY="11111111111111111111"
WORKSPACE_DIR="./workspace"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
Nothing works; I even did another fresh install. It seems like the config variables are being completely ignored.
Does it work if you set the model via the UI?
No, not for me. I'm getting this when trying from the UI: litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama2
Similar issue here ... Ollama is not working, but GPT-4 worked just fine!
This is my first time working with LiteLLM, so please take this with a grain of salt, but I think the Ollama issue was introduced when we merged https://github.com/OpenDevin/OpenDevin/pull/501.
According to the docs, the model passed to the constructor should include the provider (e.g. ollama/llama2, not just llama2).
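Concretely (a rough sketch, not tested against the current code):

import litellm

msgs = [{"role": "user", "content": "Hello"}]

# Without a provider prefix this raises BadRequestError
# ("LLM Provider NOT provided"), as in the log above:
# litellm.completion(model="llama2", messages=msgs, api_base="http://localhost:11434")

# With the "ollama/" prefix, LiteLLM knows which provider to route to:
litellm.completion(model="ollama/llama2", messages=msgs, api_base="http://localhost:11434")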
I think the solution should be something like what I'm trying to do in https://github.com/OpenDevin/OpenDevin/pull/656. Still running into other problems, though.
@ajeema we've refactored this heavily over the last week. Can you try again with latest and open a new issue if it's still broken?