How to use a local ollama model when installing OpenDevin via Docker?
Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://github.com/OpenDevin/OpenDevin/blob/main/docs/guides/Troubleshooting.md
- [X] I have checked the existing issues.
Describe the bug
I installed OpenDevin following the README.md. I want to use a local ollama model (mistral:latest), but it does not work: the log shows that it only uses gpt-3.5-turbo.
Current Version
ghcr.io/opendevin/opendevin:main
Installation and Configuration
step 1:

```bash
export LLM_API_KEY="ollama"
export LLM_MODEL="ollama/mistral:latest"
export LLM_EMBEDDING_MODEL="local"
export WORKSPACE_DIR="/data/workplace"
```

step 2:

```bash
docker run \
  -e LLM_API_KEY \
  -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
  -e SANDBOX_TYPE="exec" \
  -v $WORKSPACE_DIR:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  ghcr.io/opendevin/opendevin:main
```
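For comparison, here is a sketch of the same run with the model settings also forwarded into the container. The extra `-e LLM_MODEL` and `-e LLM_EMBEDDING_MODEL` flags are my assumption (mirroring how `LLM_API_KEY` is forwarded), not something taken from the README I followed:

```bash
# Sketch (assumption): forward the model variables the same way LLM_API_KEY is
# forwarded, since only variables passed with -e are visible inside the container.
docker run \
  -e LLM_API_KEY \
  -e LLM_MODEL \
  -e LLM_EMBEDDING_MODEL \
  -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
  -e SANDBOX_TYPE="exec" \
  -v $WORKSPACE_DIR:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  ghcr.io/opendevin/opendevin:main
```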
Model and Agent
mistral:latest
Reproduction Steps
step 1:

```bash
export LLM_API_KEY="ollama"
export LLM_MODEL="ollama/mistral:latest"
export LLM_EMBEDDING_MODEL="local"
export WORKSPACE_DIR="/data/workplace"
```

step 2:

```bash
docker run \
  -e LLM_API_KEY \
  -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
  -e SANDBOX_TYPE="exec" \
  -v $WORKSPACE_DIR:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  ghcr.io/opendevin/opendevin:main
```
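Before re-running, a quick sanity check that the local ollama server is reachable and actually has mistral pulled (assuming ollama is running on the host with its default port 11434):

```bash
# ollama's HTTP API lists locally available models at /api/tags
# (11434 is ollama's default port).
curl http://localhost:11434/api/tags
```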
Logs, Errors, Screenshots, and Additional Context
```
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO: {ip}:60977 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO: {ip}:60979 - "GET /index.html HTTP/1.1" 200 OK
INFO: {ip}:60980 - "GET /assets/index-D9MRiuwU.js HTTP/1.1" 200 OK
INFO: {ip}:60981 - "GET /assets/index-CVqJeQJv.css HTTP/1.1" 200 OK
09:02:55 - opendevin:ERROR: auth.py:31 - Invalid token
09:02:55 - opendevin:INFO: listen.py:75 - Invalid or missing credentials, generating new session ID: ad054cb0-971c-4465-aa06-548d8eb64d99
INFO: {ip}:60982 - "GET /api/auth HTTP/1.1" 200 OK
INFO: {ip}:60983 - "GET /locales/en/translation.json HTTP/1.1" 200 OK
INFO: {ip}:60984 - "GET /locales/zh-CN/translation.json HTTP/1.1" 200 OK
INFO: {ip}:60985 - "GET /locales/zh/translation.json HTTP/1.1" 404 Not Found
INFO: ('{ip}', 60986) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJhZDA1NGNiMC05NzFjLTQ0NjUtYWEwNi01NDhkOGViNjRkOTkifQ.HH6QD6GWGJpGN5Ol1hDj2WBSzFuVJbzSHdkJgbbAfwc" [accepted]
INFO: connection open
Starting loop_recv for sid: ad054cb0-971c-4465-aa06-548d8eb64d99
INFO: {ip}:60987 - "GET /apple-touch-icon.png HTTP/1.1" 200 OK
INFO: {ip}:60988 - "GET /favicon-16x16.png HTTP/1.1" 200 OK
INFO: {ip}:60991 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: {ip}:60992 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: {ip}:60989 - "GET /api/refresh-files HTTP/1.1" 200 OK
09:02:57 - opendevin:INFO: llm.py:25 - Initializing LLM with model: gpt-3.5-turbo
09:02:57 - opendevin:INFO: exec_box.py:185 - Container stopped
09:02:57 - opendevin:INFO: exec_box.py:203 - Container started
INFO: {ip}:61002 - "GET /index.html HTTP/1.1" 304 Not Modified
INFO: ('{ip}', 61003) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJhZDA1NGNiMC05NzFjLTQ0NjUtYWEwNi01NDhkOGViNjRkOTkifQ.HH6QD6GWGJpGN5Ol1hDj2WBSzFuVJbzSHdkJgbbAfwc" [accepted]
INFO: connection open
09:03:28 - opendevin:INFO: session.py:39 - WebSocket disconnected, sid: ad054cb0-971c-4465-aa06-548d8eb64d99
INFO: connection closed
Starting loop_recv for sid: ad054cb0-971c-4465-aa06-548d8eb64d99
INFO: {ip}:61004 - "GET /locales/zh/translation.json HTTP/1.1" 404 Not Found
09:03:29 - opendevin:INFO: llm.py:25 - Initializing LLM with model: gpt-3.5-turbo
09:03:40 - opendevin:INFO: exec_box.py:185 - Container stopped
09:03:41 - opendevin:INFO: exec_box.py:203 - Container started
INFO: {ip}:61005 - "GET /api/refresh-files HTTP/1.1" 200 OK
INFO: {ip}:61006 - "GET /api/messages/total HTTP/1.1" 200 OK
```
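Since the log still reports `Initializing LLM with model: gpt-3.5-turbo`, it looks like `LLM_MODEL` never reaches the backend. A quick way to see which variables the running container actually received (`<container_id>` is a placeholder taken from `docker ps`):

```bash
# List the LLM-related environment variables inside the running container.
docker exec <container_id> env | grep '^LLM'
```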