
openai.OpenAIError

Open ajeema opened this issue 1 year ago • 7 comments

I have Ollama running and am trying to use it as the LLM Devin interacts with. I have set config.toml to:

LLM_API_KEY=""
LLM_MODEL="ollama/llama2"
WORKSPACE_DIR="./workspace"
LLM_BASE_URL="http://localhost:11434"

It continues to return:

        raise OpenAIError(
    openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
    INFO:     connection closed

This occurs on the latest version of OpenDevin, with any deployment variation: running everything together or running the frontend and backend separately.

When the backend/frontend starts, the frontend complains about not connecting to the websocket; I suspect the frontend loads faster than the backend. When I refresh, the websocket connects but then disconnects due to the above error. I have rerun the config, added the base URL, changed config.toml completely, and tried overriding the variables in session.py. Nothing works.

Please advise.

When I set the embedding explicitly to local I get the following error:

    Oops. Something went wrong: Error condensing thoughts: No healthy deployment available, passed model=gpt-3.5-turbo

It seems to pass gpt-3.5-turbo no matter what is set in the config?

ajeema avatar Apr 02 '24 16:04 ajeema

What if you use a dummy key?
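
For example (an untested sketch; the error above suggests the OpenAI client just needs some non-empty value, so any placeholder should do):

    LLM_API_KEY="dummy-key"
    LLM_MODEL="ollama/llama2"
    LLM_BASE_URL="http://localhost:11434"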

bamit99 avatar Apr 02 '24 16:04 bamit99

Same here; it was working yesterday. Now I'm getting the following:

    Oops. Something went wrong: OpenAIException - 404 page not found
    ...
    Oops. Something went wrong: No healthy deployment available, passed model=gpt-4-0125-preview

My config file:

    LLM_API_KEY="11111111111111111111"
    WORKSPACE_DIR="./workspace"
    LLM_BASE_URL="http://localhost:11434"
    LLM_MODEL="ollama/mistral"
    LLM_EMBEDDING_MODEL="local"

gizbo avatar Apr 02 '24 16:04 gizbo

Nothing works; I even did another fresh install. It seems like the config variables are being completely ignored.

ajeema avatar Apr 02 '24 19:04 ajeema

Does it work if you set the model via the UI?

rbren avatar Apr 02 '24 21:04 rbren

No, not for me. I'm getting this when trying from the UI:

    litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama2

gizbo avatar Apr 03 '24 00:04 gizbo

Similar issue: Ollama is not working. GPT-4 worked just fine!

bamit99 avatar Apr 03 '24 14:04 bamit99

This is my first time working with LiteLLM, so please take this with a grain of salt, but I think the Ollama issue was introduced when we merged https://github.com/OpenDevin/OpenDevin/pull/501.

According to the LiteLLM docs, the model passed to the constructor should include the provider prefix (e.g. ollama/llama2 rather than llama2); otherwise LiteLLM cannot tell which backend to route the call to.
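
For illustration, a minimal sketch of that distinction (untested; the model name and local endpoint are just the values from the configs earlier in this thread):

    import litellm

    # With the provider prefix, litellm routes the call to the Ollama backend.
    litellm.completion(
        model="ollama/llama2",
        api_base="http://localhost:11434",
        messages=[{"role": "user", "content": "ping"}],
    )

    # Without a prefix, litellm cannot infer a provider and raises
    # litellm.exceptions.BadRequestError: LLM Provider NOT provided ...
    litellm.completion(
        model="llama2",
        messages=[{"role": "user", "content": "ping"}],
    )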

I think the solution should be something like what I'm trying to do in https://github.com/OpenDevin/OpenDevin/pull/656. Still running into other problems, though.

johnnyaug avatar Apr 03 '24 15:04 johnnyaug

@ajeema we've refactored this heavily over the last week. Can you try again with latest and open a new issue if it's still broken?

rbren avatar Apr 09 '24 20:04 rbren