LLM_MODEL not updated
Describe the bug
When running `make run`, the frontend and backend both start successfully. On the first message to Devin, the error shown below appears.
Setup and configuration
Current version, from `git log -n 1`: commit 22a9a28e46f86488d8756db3c074ee35403ba740 (HEAD -> main, origin/main, origin/HEAD)
From `cat config.toml`:
```toml
LLM_API_KEY="valid-free-tier-openAI-API key"
LLM_MODEL="gpt-3.5-turbo"
WORKSPACE_DIR="./workspace"
```
My model and agent (you can see these settings in the UI):
- Model: gpt-3.5-turbo
- Agent: monologue
Commands I ran to install and run OpenDevin:
- `make build` (with the changes prescribed in #579)
- `make setup-config` (with LLM API key and model name)
- `make run`
Steps to Reproduce: fresh install of OpenDevin with a new OpenAI API key (free tier), using model name gpt-3.5-turbo.
OBSERVATION:
```
Error condensing thoughts: No healthy deployment available, passed model=gpt-3.5-turbo
Error sending data to client: Cannot call "send" once a close message has been sent.
```
I have a similar issue. My config.toml is in the root of the git clone, which is where I execute `make ...` from, and none of the settings appear to be read from it.
I have the same issue.
Hack solution, in `opendevin/llm/llm.py`:
```python
class LLM:
    def __init__(self,
                 model=DEFAULT_MODEL_NAME,
                 api_key=DEFAULT_API_KEY,
                 base_url=DEFAULT_BASE_URL,
                 num_retries=DEFAULT_LLM_NUM_RETRIES,
                 cooldown_time=DEFAULT_LLM_COOLDOWN_TIME,
                 debug_dir=PROMPT_DEBUG_DIR
                 ):
        # Force the defaults from config.toml; the original conditionals are commented out.
        self.model_name = DEFAULT_MODEL_NAME  # model if model else DEFAULT_MODEL_NAME
        self.api_key = DEFAULT_API_KEY  # api_key if api_key else DEFAULT_API_KEY
        self.base_url = DEFAULT_BASE_URL  # base_url if base_url else DEFAULT_BASE_URL
```
The conditional expressions cause the constructor to overwrite the default values (which come from the config file) with whatever the frontend UI settings send. You can comment out the conditionals and force it to use the defaults from the config file.
@yimothysu could this be related to your recent PR?
https://github.com/OpenDevin/OpenDevin/pull/541
We might want to revert this one. I think correct behavior is:
- If user doesn't specify anything in FE, fall back to config.toml
- Once user specifies something in FE, that becomes the go-forward
Either that, or maybe we stop telling folks to specify model and key in the make instructions, and tell them to set it in the UI
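A minimal sketch of that precedence, for illustration only: it assumes the frontend passes `None` (or an empty string) when the user hasn't picked anything in the settings menu, and the `DEFAULT_*` values below are stand-ins for what `llm.py` loads from config.toml.

```python
# Illustrative only, not OpenDevin's actual code.
DEFAULT_MODEL_NAME = 'gpt-3.5-turbo'  # stand-in for the config.toml value
DEFAULT_API_KEY = 'sk-placeholder'    # stand-in
DEFAULT_BASE_URL = None               # stand-in

class LLM:
    def __init__(self, model=None, api_key=None, base_url=None):
        # A frontend value wins once the user has set one; otherwise
        # fall back to the values read from config.toml.
        self.model_name = model if model else DEFAULT_MODEL_NAME
        self.api_key = api_key if api_key else DEFAULT_API_KEY
        self.base_url = base_url if base_url else DEFAULT_BASE_URL
```

The catch is that this only works if the frontend sends nothing until the user explicitly changes a setting; if it always sends its own default, config.toml is still ignored, which is the behavior reported above.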
@rbren it would be nice if the user could just type in the model in the frontend. I have been trying to get ollama to work; I had it running last night, but some change today has caused it to stop working.
I don't know if it's related, but this config file worked with the older version:
LLM_API_KEY="ollama"
LLM_BASE_URL="http://0.0.0.0:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"
uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start -- --host
For the new version, I added that to the Makefile:
```makefile
start-backend:
	@echo "Starting backend..."
	@pipenv run uvicorn opendevin.server.listen:app --port $(BACKEND_PORT) --host 0.0.0.0

# Start frontend
start-frontend:
	@echo "Starting frontend..."
	@cd frontend && npm run start -- --port $(FRONTEND_PORT) -- --host
```
using the same config as above. I also tried the following, which was generated by the setup at the beginning:
LLM_API_KEY="ollama"
LLM_MODEL="ollama/mistral"
WORKSPACE_DIR="./workspace"
I'm getting this error; I tried choosing different models in the OpenDevin GUI to see if it makes a difference, but it doesn't.
OBSERVATION:
```
Error condensing thoughts: No healthy deployment available, passed model=mistral.mistral-7b-instruct-v0:2
Error condensing thoughts: No healthy deployment available, passed model=gpt-4-0125-preview
Oops. Something went wrong: Error condensing thoughts: BedrockException - AWS region not set: set AWS_REGION_NAME or AWS_REGION env variable or in .env file
```
Also, what I ran into with the older version is that OpenDevin just doesn't continue: I waited about 30 minutes for updates in the chat, but there were none, and no files were generated.
@rbren PR https://github.com/OpenDevin/OpenDevin/pull/541 sends the frontend model/agent/workspace dir to the backend on initialization. The error in the screenshot seems unrelated though because I don't encounter any similar errors on any model.
I think correct behavior is:
- If user doesn't specify anything in FE, fall back to config.toml
- Once user specifies something in FE, that becomes the go-forward
Yes. The only change necessary is to send LLM_MODEL from the backend to the frontend on initialization. This way the frontend will have the correct model selected in the settings menu.
I don't think we need to revert because the PR fixes https://github.com/OpenDevin/OpenDevin/issues/500.
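A rough sketch of the kind of initialization payload that could carry it; the message shape and field names here are assumptions for illustration, not OpenDevin's actual protocol:

```python
import json

def build_init_message(config: dict) -> str:
    """Hypothetical: payload the backend sends to the frontend on session init,
    so the settings menu pre-selects the model from config.toml."""
    return json.dumps({
        'action': 'initialize',
        'args': {
            # value read from config.toml, e.g. "gpt-3.5-turbo"
            'model': config.get('LLM_MODEL'),
        },
    })

# With the config.toml from this issue, the frontend would receive:
# {"action": "initialize", "args": {"model": "gpt-3.5-turbo"}}
print(build_init_message({'LLM_MODEL': 'gpt-3.5-turbo'}))
```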
The only change necessary is to send LLM_MODEL from the backend to the frontend on initialization.
OK, this SGTM. Can you open up a separate issue to get that done? It seems like a lot of users are confused.
Yes, new issue here: https://github.com/OpenDevin/OpenDevin/issues/617
I can see how it's confusing since LLM_MODEL is effectively not respected with the new frontend initialization.
Either that, or maybe we stop telling folks to specify model and key in the make instructions, and tell them to set it in the UI
This sounds a lot more user-friendly, if one could set the configs in the frontend (including a toggle or a base URL for running it locally).
This has been fixed and then broke again: https://github.com/OpenDevin/OpenDevin/issues/793
I will close it here so we can keep it in one place.