
OpenDevin with local model ollama

Open arywidos opened this issue 10 months ago • 9 comments

What problem or use case are you trying to solve?

I changed config.toml to the settings below (for ollama), following the README section "Picking a Model":

LLM_API_KEY="11111111111111111111" WORKSPACE_DIR="workspace" LLM_BASE_URL="http://localhost:11434" LLM_MODEL= "gemma:instruct" LLM_EMBEDDING_MODEL="local"

But after restarting everything, OpenDevin still complains about invalid OpenAI keys; it seems to ignore config.toml. Where could I be going wrong?

raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-PJNWr***************************************evSl. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Describe the UX of the solution you'd like

Do you have thoughts on the technical implementation?

Describe alternatives you've considered

Additional context

arywidos avatar Mar 31 '24 15:03 arywidos

@arywidos did you try to use ollama/gemma instead of gemma in your LLM_MODEL?
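For example, keeping the rest of the config as it is, the model line would become something like this (the exact tag depends on what you pulled in ollama):

LLM_MODEL="ollama/gemma:instruct"

The ollama/ prefix is what tells litellm to route the request to the ollama backend.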

sercanerhan avatar Mar 31 '24 16:03 sercanerhan

@arywidos did you try to use ollama/gemma instead of gemma in your LLM_MODEL?

Hi sercanerhan, I tried every combination to make it work with Ollama llama2. The last configurations were:

LLM_API_KEY="11111111111111111111" WORKSPACE_DIR="workspace" LLM_BASE_URL="http://localhost:11434" LLM_MODEL= "llama2" LLM_EMBEDDING_MODEL="llama2"

and

LLM_API_KEY="11111111111111111111" WORKSPACE_DIR="workspace" LLM_BASE_URL="http://localhost:11434" LLM_MODEL= "ollama/llama2" LLM_EMBEDDING_MODEL="local"

and

LLM_API_KEY="11111111111111111111" WORKSPACE_DIR="workspace" LLM_BASE_URL="http://localhost:11434" LLM_MODEL= "ollama/llama2" LLM_EMBEDDING_MODEL="llama2"

All of them still give the same error. Could you advise?

Oops. Something went wrong: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama2 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

From the frontend UI I select the model llama2.
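Judging by that litellm error (model=llama2 with no provider), whatever value actually reaches litellm, whether from config.toml or the UI picker, seems to need the provider prefix as well, i.e.:

LLM_MODEL="ollama/llama2"

and ollama/llama2 rather than plain llama2 in the frontend model selection.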

arywidos avatar Mar 31 '24 17:03 arywidos

Not sure, but to double-check it:

  1. run ollama serve in a separate terminal to start ollama
  2. use the configuration from your very first post, but change LLM_EMBEDDING_MODEL="local" (see the combined sketch below)

Lastly, I'm not sure whether WORKSPACE_DIR="workspace" is a valid path; that depends on your configuration.
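Putting that together, the suggested config.toml would look roughly like this (I've used ./workspace here; adjust it to your actual path, and the model name is only an example):

LLM_API_KEY="11111111111111111111"
WORKSPACE_DIR="./workspace"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="ollama/gemma:instruct"
LLM_EMBEDDING_MODEL="local"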

sercanerhan avatar Mar 31 '24 17:03 sercanerhan

I can't get it to work either.

WORKSPACE_DIR="./workspace"

ollama - https://docs.litellm.ai/docs/providers/ollama

LLM_API_KEY="11111111111111111111" LLM_MODEL ='ollama/codellama' LLM_EMBEDDING_MODEL="local" LLM_BASE_URL="http://localhost:11434"

aSocialMenace avatar Mar 31 '24 17:03 aSocialMenace

What version of ollama are you using?
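You can check it with:

ollama --version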

nickovaras avatar Mar 31 '24 18:03 nickovaras

I get it to "work" but I often get this type of response

Oops. Something went wrong: Expecting value: line 1 column 12 (char 11)

ajeema avatar Mar 31 '24 21:03 ajeema

I don't think you can add this to the config.toml.

PierrunoYT avatar Mar 31 '24 21:03 PierrunoYT

This config.toml works for me (i.e. it runs), but I also get frequent error responses while it asks itself circular questions (which seems normal for this alpha?):

LLM_API_KEY="na"
WORKSPACE_DIR="./workspace" # i also created this dir called workspace
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="ollama/llama2"
LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

Also make sure ollama is the latest version
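On Linux the usual way to upgrade in place is to re-run the official install script (the desktop builds prompt you to update):

curl -fsSL https://ollama.com/install.sh | sh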

dfsm avatar Mar 31 '24 22:03 dfsm

I am getting the error in a loop

"Oops. Something went wrong: Expecting value: line 1 column 12 (char 11)"

I uninstalled the orjson lib and reinstalled it to make sure I'm not seeing JSON decode errors.

orjson issue (MacOS)

  • pip uninstall orjson
  • pip install --no-cache-dir --only-binary :all: orjson

Still seeing the issue.
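One more sanity check that the reinstalled binary wheel actually imports (the version printed will vary):

python -c "import orjson; print(orjson.__version__)"

If that works, the decode error may be coming from the model returning output that isn't valid JSON rather than from orjson itself.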

seshakiran avatar Mar 31 '24 23:03 seshakiran

With the newest OpenDevin and Docker image, the error is now gone when using the config.toml below.

LLM_API_KEY="11111111111111111111" WORKSPACE_DIR="workspace" LLM_BASE_URL="http://localhost:11434" LLM_MODEL= "ollama/llama2" LLM_EMBEDDING_MODEL="llama2"

You may close this, thanks.

arywidos avatar Apr 01 '24 03:04 arywidos