OpenHands
OpenDevin with local model ollama
What problem or use case are you trying to solve?
I changed config.toml to the settings below (for ollama), following the README section "Picking a Model".
LLM_API_KEY="11111111111111111111"
WORKSPACE_DIR="workspace"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="gemma:instruct"
LLM_EMBEDDING_MODEL="local"
But after restarting everything, OpenDevin still complains about an invalid OpenAI key; it seems to ignore config.toml. Where could I be going wrong?
raise self._make_status_error_from_response(err.response) from None openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-PJNWr***************************************evSl. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
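A quick way to rule out a config parsing problem is to check that config.toml loads and contains the keys you expect. A minimal debugging sketch, assuming Python 3.11+ (for the standard-library tomllib) and that config.toml sits in the directory OpenDevin is launched from:

import tomllib  # Python 3.11+; install and import "tomli" instead on older versions

with open("config.toml", "rb") as f:
    cfg = tomllib.load(f)

# A key that prints None was not picked up from the file at all.
for key in ("LLM_API_KEY", "LLM_BASE_URL", "LLM_MODEL", "LLM_EMBEDDING_MODEL", "WORKSPACE_DIR"):
    print(key, "=", repr(cfg.get(key)))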
Describe the UX of the solution you'd like
Do you have thoughts on the technical implementation?
Describe alternatives you've considered
Additional context
@arywidos did you try to use ollama/gemma instead of gemma in your LLM_MODEL?
Hi Sercanerhan, I tried every combination to make it work with Ollama llama2. The last configurations were:
LLM_API_KEY="11111111111111111111"
WORKSPACE_DIR="workspace"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="llama2"
LLM_EMBEDDING_MODEL="llama2"
and
LLM_API_KEY="11111111111111111111"
WORKSPACE_DIR="workspace"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="ollama/llama2"
LLM_EMBEDDING_MODEL="local"
and
LLM_API_KEY="11111111111111111111"
WORKSPACE_DIR="workspace"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="ollama/llama2"
LLM_EMBEDDING_MODEL="llama2"
All of them still give the same error. Could you advise?
Oops. Something went wrong: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama2 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
From the frontend UI I select the model llama2.
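That error comes from litellm, which routes each request based on a provider prefix in the model name: a bare llama2 names no provider, while ollama/llama2 tells it to call the local Ollama server, which is why the bare llama2 selected in the UI is rejected. A minimal sketch of the routing, assuming litellm is installed, ollama serve is running, and llama2 has been pulled:

from litellm import completion

# The "ollama/" prefix selects the provider; without it litellm raises
# "LLM Provider NOT provided", exactly as in the error above.
response = completion(
    model="ollama/llama2",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)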
Not sure, but to double-check it:
- run ollama serve in a separate terminal to start ollama (see the quick reachability check below)
- use the configuration from your very first post, but change LLM_EMBEDDING_MODEL="local"
- lastly, I'm not sure whether WORKSPACE_DIR="workspace" is a valid path or not, depending on your configuration.
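To confirm ollama serve is actually reachable at the LLM_BASE_URL from config.toml, here is a quick check (a sketch, assuming the requests package is installed; /api/tags is Ollama's endpoint for listing locally pulled models):

import requests

# Returns HTTP 200 and a JSON list of pulled models when ollama serve is up.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
print("ollama is up; pulled models:", [m["name"] for m in resp.json().get("models", [])])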
I can't get it to work either.
ollama docs: https://docs.litellm.ai/docs/providers/ollama
WORKSPACE_DIR="./workspace"
LLM_API_KEY="11111111111111111111"
LLM_MODEL="ollama/codellama"
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:11434"
What version of ollama are you using?
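A quick way to check the installed version, assuming the ollama CLI is on your PATH (a Python wrapper around the same ollama --version command you could run in a shell):

import subprocess

# Prints the installed version, e.g. "ollama version is ...".
subprocess.run(["ollama", "--version"], check=True)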
I get it to "work" but I often get this type of response
Oops. Something went wrong: Expecting value: line 1 column 12 (char 11)
I don't think you can add this to the config.toml.
This config.toml
works for me (i.e. it runs), but I also get frequent error responses while it asks itself circular questions (seems normal for this alpha?)
LLM_API_KEY="na"
WORKSPACE_DIR="./workspace" # i also created this dir called workspace
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="ollama/llama2"
LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"
Also make sure ollama is the latest version
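As an end-to-end smoke test of that config (a sketch, assuming requests is installed and llama2 has been pulled), you can hit Ollama's native /api/generate endpoint with the same base URL and model name, so any failure is clearly on the Ollama side rather than inside OpenDevin:

import requests

# One-shot, non-streaming generation against the same base URL / model as config.toml.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Reply with the single word: ok", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])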
I am getting the error in a loop
"Oops. Something went wrong: Expecting value: line 1 column 12 (char 11)"
I uninstalled the orjson library and reinstalled it to make sure I am not seeing JSON decode errors because of a broken install.
orjson issue (macOS):
- pip uninstall orjson
- pip install --no-cache-dir --only-binary :all: orjson
Still seeing the issue.
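For what it's worth, "Expecting value: line 1 column 12 (char 11)" is the wording of Python's built-in JSON decoder, so the error most likely means the agent failed to parse the model's reply as JSON (local models often answer in prose instead of the structured output expected), rather than pointing at a broken orjson install. A tiny illustration with a made-up reply:

import json

# Hypothetical model reply: plain prose where the agent expects a JSON object.
reply = "Sure, here is my plan: first I will ..."
try:
    json.loads(reply)
except json.JSONDecodeError as err:
    print(err)  # e.g. "Expecting value: line 1 column 1 (char 0)"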
With the newest OpenDevin and Docker image, the error is now gone when using the config.toml below.
LLM_API_KEY="11111111111111111111"
WORKSPACE_DIR="workspace"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL="ollama/llama2"
LLM_EMBEDDING_MODEL="llama2"
You may close this, thanks.