
Error condensing thoughts: No healthy deployment available, passed model=mistral.mistral-7b-instruct-v0:2

stratte89 opened this issue 10 months ago · 7 comments

Setup and configuration

Current version:

(base) stratte@stratte-MS-7D08:~/Schreibtisch/AI/OpenDevin$ git log -n 1
fatal: not a git repository (or any of the parent directories): .git
Operating system: Ubuntu

Configurations I tried:

LLM_API_KEY="ollama"
LLM_BASE_URL="http://0.0.0.0:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

LLM_API_KEY="ollama"
LLM_MODEL="ollama/mistral"
WORKSPACE_DIR="./workspace"

LLM_API_KEY="lm-studio"
LLM_BASE_URL="http://localhost:1234/v1"
LLM_MODEL="openai/stable-code-instruct-3b-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

[2024-04-03 00:57:54.403] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-04-03 00:57:54.405] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2024-04-03 00:57:54.405] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-04-03 00:57:54.405] [INFO] [LM STUDIO SERVER] ->	GET  http://localhost:1234/v1/models
[2024-04-03 00:57:54.406] [INFO] [LM STUDIO SERVER] ->	POST http://localhost:1234/v1/chat/completions
[2024-04-03 00:57:54.406] [INFO] [LM STUDIO SERVER] ->	POST http://localhost:1234/v1/completions
[2024-04-03 00:57:54.406] [INFO] [LM STUDIO SERVER] Model loaded: bartowski/stable-code-instruct-3b-GGUF/stable-code-instruct-3b-Q8_0.gguf
[2024-04-03 00:57:54.406] [INFO] [LM STUDIO SERVER] Logs are saved into /tmp/lmstudio-server-log.txt

**My model and agent** (you can see these settings in the UI):
* Model: bartowski/stable-code-instruct-3b-GGUF/stable-code-instruct-3b-Q8_0.gguf, mistral
* Agent: LM-Studio, Ollama

**Commands I ran to install and run OpenDevin**:
make build
make setup-config
make start-backend
make start-frontend

**Steps to Reproduce**:
1. Run the LM-Studio server
2. Run the backend
3. Run the frontend
4. Chat with Devin

or, for Ollama:
1. Run `ollama run mistral`
2. Run `ollama serve`
3. Run the backend
4. Run the frontend
5. Chat with Devin

**Logs, error messages, and screenshots**:
OBSERVATION:
Error condensing thoughts: No healthy deployment available, passed model=mistral.mistral-7b-instruct-v0:2
Error condensing thoughts: No healthy deployment available, passed model=gpt-4-0125-preview
Oops. Something went wrong: Error condensing thoughts: BedrockException - AWS region not set: set AWS_REGION_NAME or AWS_REGION env variable or in .env file

#### Describe the bug

The older OpenDevin version connected to Ollama with:

LLM_API_KEY="ollama"
LLM_BASE_URL="http://0.0.0.0:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start -- --host
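
As an aside for anyone debugging this, here is a minimal sketch (not OpenDevin's actual code) of how the LLM_MODEL and LLM_BASE_URL values above are consumed by litellm, the library OpenDevin routes LLM calls through. If this direct call fails, the problem is between litellm and Ollama rather than in OpenDevin itself:

```python
# Minimal sketch, not OpenDevin's actual code: how litellm consumes the
# LLM_MODEL and LLM_BASE_URL values from the config above.
# Assumes `ollama serve` is running and the mistral model has been pulled.
from litellm import completion

response = completion(
    model="ollama/mistral",             # the "ollama/" prefix selects the Ollama provider
    api_base="http://localhost:11434",  # same value as LLM_BASE_URL
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```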

For the new version, I added this to the Makefile:
start-backend:
	@echo "Starting backend..."
	@pipenv run uvicorn opendevin.server.listen:app --port $(BACKEND_PORT) --host 0.0.0.0

# Start frontend
start-frontend:
	@echo "Starting frontend..."
	@cd frontend && npm run start -- --port $(FRONTEND_PORT) -- --host

using the same config as above. I also tried the following, which was generated by the initial setup:

LLM_API_KEY="ollama"
LLM_MODEL="ollama/mistral"
WORKSPACE_DIR="./workspace"

Something else I ran into with the older version is that OpenDevin simply stops making progress: I waited about 30 minutes for updates in the chat, but there were none and no files were generated.

For LM-Studio I tried the following config, without any changes to the Makefile:

LLM_API_KEY="lm-studio"
LLM_BASE_URL="http://localhost:1234/v1"
LLM_MODEL="openai/stable-code-instruct-3b-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"


EDIT: I just installed the newest version and used this config:

LLM_API_KEY="ollama"
LLM_BASE_URL="http://0.0.0.0:11434"
LLM_MODEL="ollama/llama2:13b"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"


  VITE v5.2.7  ready in 234 ms

  ➜  Local:   http://localhost:3001/
  ➜  Network: http://192.168.178.20:3001/
INFO:     Started server process [139984]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)

  File "/home/stratte/Schreibtisch/AI/OpenDevin/agenthub/monologue_agent/utils/monologue.py", line 36, in condense
    raise RuntimeError(f"Error condensing thoughts: {e}")
RuntimeError: Error condensing thoughts: No healthy deployment available, passed model=gpt-4-0125-preview

OBSERVATION:
Error condensing thoughts: No healthy deployment available, passed model=gpt-4-0125-preview
Oops. Something went wrong: Error condensing thoughts: No healthy deployment available, passed model=gpt-4-0125-preview

Oops. Something went wrong: Error condensing thoughts: string indices must be integers, not 'str'
Oops. Something went wrong: 'str' object has no attribute 'copy'

EDIT: I'm using Ollama with the old version of Devin since it's working for me, but I encountered these errors:

Devin created a folder in the workspace, but the folder is locked. This is the output message:


Oops. Something went wrong: [Errno 13] Permission denied: 'workspace/game/main.py'
The repeated permission denied errors suggest a misunderstanding in file path handling or an environment restriction in place. Given the environment's constraints, directly creating or modifying files may not be feasible as initially thought. To progress, I need to reconsider the approach. Developing a game in this setting without being able to write files complicates things significantly. An alternative approach could be to outline in detail the steps and logic required to develop the game, including pseudo-code and architecture descriptions, which should be devised keeping in mind the limitations encountered. This way, I can provide a comprehensive guide for creating the game, even if I cannot execute or write the code directly.
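
As a side note on the permission error above: a quick generic check (not part of OpenDevin) can show whether the workspace files ended up owned by another user, e.g. root, which commonly happens when a sandbox container writes them. The paths below are the ones from the error message:

```python
# Generic diagnostic sketch, not OpenDevin code: report owner, mode, and
# writability of the paths from the "Permission denied" error above.
import os
import pwd
import stat

for path in ("workspace", "workspace/game", "workspace/game/main.py"):
    try:
        st = os.stat(path)
    except FileNotFoundError:
        print(f"{path}: does not exist")
        continue
    try:
        owner = pwd.getpwuid(st.st_uid).pw_name
    except KeyError:
        owner = str(st.st_uid)  # uid with no passwd entry, e.g. from a container
    print(f"{path}: owner={owner} mode={stat.filemode(st.st_mode)} "
          f"writable_by_me={os.access(path, os.W_OK)}")
```

If the owner turns out to be root, reclaiming the directory on the host (for example with sudo chown -R $USER workspace) is the usual fix.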

stratte89 commented on Apr 2, 2024

I have a similar problem. I entered the information for my LM-Studio instance when I executed `make setup-config`:

Enter your LLM API key: lm-studio
Enter your LLM Model name [default: gpt-4-0125-preview]: TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q8_0.gguf

I set both as stated by the LM-Studio server, but I get these two errors:

Oops. Something went wrong: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Oops. Something went wrong: No healthy deployment available, passed model=gpt-4-0125-preview

Zeddesnetos commented on Apr 3, 2024

Same here, trying locally with Ollama and with gpt-3.5-turbo too, but it keeps picking gpt-4-0125-preview.

Oops. Something went wrong: Error condensing thoughts: No healthy deployment available, passed model=gpt-4-0125-preview

colonha commented on Apr 3, 2024

> Same here, trying locally with Ollama and with gpt-3.5-turbo too, but it keeps picking gpt-4-0125-preview.

Can you try to update your repository (git pull, make build) and try again? An issue has been found and fixed.

enyst commented on Apr 3, 2024

> I have a similar problem. I entered the information for my LM-Studio instance when I executed `make setup-config`:
>
> Enter your LLM API key: lm-studio
> Enter your LLM Model name [default: gpt-4-0125-preview]: TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q8_0.gguf
>
> I set both as stated by the LM-Studio server, but I get these two errors:
>
> Oops. Something went wrong: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
>
> Oops. Something went wrong: No healthy deployment available, passed model=gpt-4-0125-preview

As above, please try to update. In addition, though, I'm not sure your setting for the model works. If it doesn't work after updating, I'd suggest trying LLM_MODEL="openai/Mistral-7B-Instruct-v0.2-GGUF". See https://docs.litellm.ai/docs/providers/openai_compatible
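
For reference, here is a minimal sketch of the openai-compatible call pattern those docs describe, filled in with the LM-Studio values reported in this thread (illustrative only, not OpenDevin's internal code). The "openai/" prefix only selects the OpenAI-style API; without api_base pointing at the local server, the request goes to api.openai.com, which is where the 401 "Incorrect API key" error above comes from:

```python
# Illustrative sketch of litellm's OpenAI-compatible provider pointed at a
# local LM Studio server. Model name, URL, and key are the values from this
# thread; LM Studio accepts any placeholder API key.
from litellm import completion

response = completion(
    model="openai/Mistral-7B-Instruct-v0.2-GGUF",  # "openai/" = OpenAI-compatible API
    api_base="http://localhost:1234/v1",           # LM Studio's local endpoint
    api_key="lm-studio",                           # placeholder; the local server ignores it
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```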

enyst commented on Apr 3, 2024

> As above, please try to update. In addition, though, I'm not sure your setting for the model works. If it doesn't work after updating, I'd suggest trying LLM_MODEL="openai/Mistral-7B-Instruct-v0.2-GGUF". See https://docs.litellm.ai/docs/providers/openai_compatible

Doesn't work for me. After the update, my old way of specifying the model now gives me a different error:

litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q8_0.gguf
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

and your suggested "openai/Mistral-7B-Instruct-v0.2-GGUF" produces the same/similar errors I mentioned before with my old way of specifying the model:

Oops. Something went wrong: Error condensing thoughts: No healthy deployment available, passed model=openai/Mistral-7B-Instruct-v0.2-GGUF

Oops. Something went wrong: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Zeddesnetos commented on Apr 4, 2024

> As above, please try to update. In addition, though, I'm not sure your setting for the model works. If it doesn't work after updating, I'd suggest trying LLM_MODEL="openai/Mistral-7B-Instruct-v0.2-GGUF". See https://docs.litellm.ai/docs/providers/openai_compatible
>
> Doesn't work for me. After the update, my old way of specifying the model now gives me a different error:
>
> litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q8_0.gguf
> Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
>
> and your suggested "openai/Mistral-7B-Instruct-v0.2-GGUF" produces the same/similar errors I mentioned before with my old way of specifying the model:
>
> Oops. Something went wrong: Error condensing thoughts: No healthy deployment available, passed model=openai/Mistral-7B-Instruct-v0.2-GGUF
>
> Oops. Something went wrong: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

I wasn't able to make LM-Studio work, but I was able to make it work with the oobabooga webui. Here is my guide: https://github.com/OpenDevin/OpenDevin/commit/08a2dfb01af1aec6743f5e4c23507d63980726c0#commitcomment-140559598

stratte89 commented on Apr 4, 2024

I have the same issue with the latest version

WORKSPACE_DIR="./workspace"
LLM_API_KEY="ollama"
LLM_BASE_URL="http://localhost:11434"
LLM_MODEL ='ollama/mistral'

(screenshot attached)

Jmzp commented on Apr 4, 2024

I had the same issue, and it seems the config.toml is overwritten by a start_event from the frontend UI in opendevin/server/agent/manager.py create_controller(...).

So this means you have to choose the model from the web UI now. Leaving it blank doesn't work (as it should, according to the code), because instead of becoming an empty string ("") it becomes "null", and then no model is found at all.
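
A hypothetical sketch of the kind of guard that would address this; the helper name is made up and this is not the actual create_controller() code, it just shows treating the literal string "null" from the UI the same as an empty value and falling back to config.toml:

```python
# Hypothetical illustration only -- not the actual OpenDevin code.
# Treat None, "", and the literal string "null" sent by the frontend
# start_event as "no model chosen" and fall back to the config.toml value.
def resolve_model(ui_model, config_model):
    if ui_model and str(ui_model).strip().lower() not in ("null", "none"):
        return ui_model
    return config_model

# Example: the UI sends "null"; the configured model wins.
assert resolve_model("null", "ollama/mistral") == "ollama/mistral"
assert resolve_model("ollama/llama2:13b", "ollama/mistral") == "ollama/llama2:13b"
```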

liontariai commented on Apr 7, 2024

This should be solved as of https://github.com/OpenDevin/OpenDevin/pull/863

If not, let us know!

rbren commented on Apr 7, 2024