
Option to use Ollama with local models instead of OpenAI/ChatGPT

jefferyb opened this issue 1 year ago · 2 comments

Hi team, I was wondering if it would be possible to add an option to the OpenDevin project to use Ollama with local models instead of OpenAI/ChatGPT.

I was thinking it could provide some benefits, such as:

  • Cost savings: Ollama is an open-source platform for running models locally, which could significantly reduce costs compared to OpenAI/ChatGPT.

  • Flexibility & security: Ollama allows the use of different models (you can even create your own), which could be more suitable for scenarios where data privacy or security concerns arise.

  • It would also benefit those who prefer to keep their data on-premises/local.

Just thought I'd put it out there and ask :) Thank you for your time and consideration. -Jeffery

jefferyb avatar Mar 25 '24 16:03 jefferyb

Just saw this, https://github.com/OpenDevin/OpenDevin/issues/141, after posting mine...

jefferyb avatar Mar 25 '24 16:03 jefferyb

As an intermediate solution: openrouter.ai has an OpenAI-like API, and many tools (LangChain, LiteLLM, etc.) have integrations for it. While it doesn't support local models, it does support many, many models, including some from Hugging Face. So I think that would be a relatively easy integration. On the other hand, suddenly supporting many models raises the question: should different agents have the option to use different models? E.g. planning by ChatGPT 3.5, but implementing by Claude? (Or adding code with an expensive model and extending documentation with a cheap one.)
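
For illustration, a minimal LiteLLM sketch of that per-agent split; the OpenRouter model IDs and the placeholder API key are just assumptions for the example, not anything OpenDevin ships with:

    import os
    from litellm import completion

    os.environ["OPENROUTER_API_KEY"] = "sk-or-..."  # placeholder key

    # Same call signature for every provider; only the model string changes,
    # so a planner agent and an implementer agent could each get their own.
    plan = completion(
        model="openrouter/openai/gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Plan the change in three steps."}],
    )
    patch = completion(
        model="openrouter/anthropic/claude-2",
        messages=[{"role": "user", "content": "Implement step 1 of the plan."}],
    )
    print(plan.choices[0].message.content)
    print(patch.choices[0].message.content)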

Actually, I just found #115 which is exactly about Openrouter. Sorry for not spotting sooner.

dvolgyes avatar Mar 25 '24 21:03 dvolgyes

See here for how to select a different model: https://github.com/OpenDevin/OpenDevin?tab=readme-ov-file#picking-a-model

Thanks all!

rbren avatar Mar 26 '24 11:03 rbren

This is a template. Run cp config.toml.template config.toml to use it.

LLM_API_KEY="ollama"
WORKSPACE_DIR="./workspace"
LLM_BASE_URL="http://192.168.0.21:11434"
LLM_MODEL="openchat:7b-v3.5-1210-q5_K_M"
LLM_EMBEDDING_MODEL="ollama_chat/nomic-embed-text:latest"
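
A quick sanity check for this config, assuming the Ollama server from LLM_BASE_URL above is running (standard-library Python only; /api/tags is Ollama's endpoint for listing pulled models):

    import json
    import urllib.request

    BASE_URL = "http://192.168.0.21:11434"  # LLM_BASE_URL from the config above

    # /api/tags lists the models currently pulled on the Ollama server.
    with urllib.request.urlopen(f"{BASE_URL}/api/tags") as resp:
        models = [m["name"] for m in json.load(resp)["models"]]

    print(models)  # the configured LLM_MODEL should appear here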

hqnicolas avatar Mar 31 '24 00:03 hqnicolas

@hqnicolas is there an error message you're seeing?

Here's your problem: LLM_API_KEY="ollama"

That's supposed to be an API key, not a model name. You probably just want to remove it.

rbren avatar Mar 31 '24 15:03 rbren

@rbren that API key is a dummy value by the Ollama standard; you can use anything.

Since Ollama 1.24 you can use it as an OpenAI-compatible API. Here I'm using 1.30 (ROCm), and the Ollama API is open without a key, so a dummy key works.
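
For reference, a minimal sketch of that OpenAI-compatible usage, assuming the server and model from the config above (the api_key is required by the client but ignored by Ollama, exactly the dummy-key behavior described):

    from openai import OpenAI

    # Recent Ollama versions expose an OpenAI-compatible endpoint under /v1.
    client = OpenAI(base_url="http://192.168.0.21:11434/v1", api_key="ollama")

    resp = client.chat.completions.create(
        model="openchat:7b-v3.5-1210-q5_K_M",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(resp.choices[0].message.content)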

I think my mistake was in the model name: LLM_MODEL="openchat:7b-v3.5-1210-q5_K_M" or LLM_MODEL="ollama_chat/openchat:7b-v3.5-1210-q5_K_M"; I don't remember which one is right... (a quick way to test both is sketched below).
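
If OpenDevin routes these through LiteLLM (the ollama_chat/ prefix is LiteLLM's convention), one way to settle it is to try both spellings directly against the same server; a sketch, assuming the model is pulled at the IP from the config above:

    from litellm import completion

    # Whichever spelling returns a completion is the one LiteLLM expects;
    # a bare model name with no provider prefix should raise an error.
    for model in ("openchat:7b-v3.5-1210-q5_K_M",
                  "ollama_chat/openchat:7b-v3.5-1210-q5_K_M"):
        try:
            resp = completion(
                model=model,
                api_base="http://192.168.0.21:11434",
                messages=[{"role": "user", "content": "ping"}],
            )
            print(model, "->", resp.choices[0].message.content[:40])
        except Exception as exc:
            print(model, "failed:", exc)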

Apart from that, everything was working fine! The project is amazing; it was working like Microsoft AutoGen Studio. I think I need a bigger GPU.

hqnicolas avatar Apr 01 '24 22:04 hqnicolas