
[Bug]: Duplicate LLM_BASE_URL in config.toml when setting up ollama with llama2

Open · isavita opened this issue 9 months ago · 1 comment

Is there an existing issue for the same bug?

  • [X] I have checked the troubleshooting document at https://github.com/OpenDevin/OpenDevin/blob/main/docs/guides/Troubleshooting.md
  • [X] I have checked the existing issues.

Describe the bug

When you go through the make setup-config steps and choose ollama, you end up with a config.toml that sets the LLM_BASE_URL key twice, which the TOML parser rejects as a duplicate key.
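For context, the toml package that the backend uses to parse config.toml refuses repeated top-level keys. A minimal sketch of the failure (assuming only that the toml package is installed, as in the traceback below):

```python
import toml

# Two assignments to LLM_BASE_URL, as written by `make setup-config`
broken = """
LLM_BASE_URL="http://localhost:11434"
LLM_EMBEDDING_MODEL="llama2"
LLM_BASE_URL="http://localhost:11434"
"""

try:
    toml.loads(broken)
except ValueError as err:  # toml.TomlDecodeError derives from ValueError
    print(err)  # reports the duplicate key, the same failure as in the logs below
```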

Current Version

This is from main commit d1f62bb6be6a5300eef9d86a9cf4cd16f3198aa1

Installation and Configuration

I am using a local setup with ollama. I wanted to add some other embeddings (e.g. "mxbai-embed-large").
However, when I first started testing to see how "llama2" is used, I noticed that this doesn't work.

Model and Agent

This happens on startup with make run.

Reproduction Steps

  1. Run make setup-config
  2. Set for LLM Base URL http://localhost:11434
  3. Set for LLM Embedding Model llama2
  4. Run make run

Logs, Errors, Screenshots, and Additional Context

$> make setup-config
Setting up config.toml...
Enter your LLM Model name, used for running without UI. Set the model in the UI after you start the app. (see https://docs.litellm.ai/docs/providers for full list) [default: gpt-3.5-turbo-1106]: ollama/llama3
Enter your LLM API key: ollama
Enter your LLM Base URL [mostly used for local LLMs, leave blank if not needed - example: http://localhost:5001/v1/]: http://localhost:11434
Enter your LLM Embedding Model\nChoices are openai, azureopenai, llama2 or leave blank to default to 'BAAI/bge-small-en-v1.5' via huggingface
> llama2
Enter the local model URL (will overwrite LLM_BASE_URL): http://localhost:11434
Enter your workspace directory [default: ./workspace]: /Users/isavita/code/workspace
Config.toml setup completed.
$> cat config.toml
LLM_MODEL="ollama/llama3"
LLM_API_KEY="ollama"
LLM_BASE_URL="http://localhost:11434"
LLM_EMBEDDING_MODEL="llama2"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_BASE="/Users/isavita/code/workspace"
$> make run
Running the app...
Starting backend server...
Waiting for the backend to start...
Traceback (most recent call last):
  File "/Users/isavita/anaconda3/envs/opendevin/lib/python3.11/site-packages/toml/decoder.py", line 511, in loads
    ret = decoder.load_line(line, currentlevel, multikey,
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/isavita/anaconda3/envs/opendevin/lib/python3.11/site-packages/toml/decoder.py", line 781, in load_line
    raise ValueError("Duplicate keys!")
ValueError: Duplicate keys!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/isavita/anaconda3/envs/opendevin/bin/uvicorn", line 8, in <module>
    sys.exit(main())
             ^^^^^^
...
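
As a workaround until the setup script stops writing the key twice, deleting the second LLM_BASE_URL line from config.toml by hand makes the file parse again. A quick check of that, as a sketch using the plain toml package (no opendevin code involved):

```python
import toml

# After removing the duplicated LLM_BASE_URL line by hand,
# config.toml should load without the "Duplicate keys!" error.
with open("config.toml") as f:
    cfg = toml.load(f)

print(cfg["LLM_BASE_URL"])         # http://localhost:11434
print(cfg["LLM_EMBEDDING_MODEL"])  # llama2
```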

isavita · Apr 28 '24 20:04

I was thinking of adding LLM_EMBEDDING_BASE_URL and using it in memory.py here. However, I am not sure whether you are fine with that. If so, I can make a small PR for it.
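For illustration only, a rough sketch of the idea (the key name and the plain-toml access are assumptions; the real code in memory.py goes through opendevin's own config handling): a dedicated LLM_EMBEDDING_BASE_URL would be read for the embedding client and fall back to LLM_BASE_URL when it is not set.

```python
import toml

# Hypothetical sketch, not the actual opendevin config API:
# prefer LLM_EMBEDDING_BASE_URL for embeddings and fall back
# to LLM_BASE_URL when the dedicated key is not present.
config = toml.load("config.toml")
embedding_base_url = config.get("LLM_EMBEDDING_BASE_URL") or config.get("LLM_BASE_URL")
print(embedding_base_url)
```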

isavita · Apr 28 '24 21:04