Shaswata Roy

Results: 5 comments by Shaswata Roy

> hii i git checkout rb/remove-key-assertion and set variable below
>
> set WORKSPACE_DIR=test
> set LLM_EMBEDDING_MODEL = llama2
> set LLM_EMBEDDING_MODEL_BASE_URL = http://127.0.0.1:11434
>
> still when i add prompt on step 0...
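Note: if those commands were run in a bash-like shell, `set` would not export environment variables at all, and the spaces around `=` would break the assignments. A minimal sketch of the exported form, assuming bash and the same variable names:

```
# Sketch, assuming a bash-like shell: export the variables so the
# opendevin process inherits them, with no spaces around "=".
export WORKSPACE_DIR=test
export LLM_EMBEDDING_MODEL=llama2
export LLM_EMBEDDING_MODEL_BASE_URL=http://127.0.0.1:11434
```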

> It's defaulting to using OpenAI for the core model. Can you set `LLM_MODEL="ollama/llama2"` and see if that fixes it?

STEP 99
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If...

> It looks like litellm thinks the model name is just `llama2`. Did you set `LLM_MODEL=llama2`? Or `LLM_MODEL=ollama/llama2`? It should be the latter.

Yes, both done. Still same issue.
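A quick way to narrow this down (a sketch, assuming a local Ollama install on its default port) is to confirm that Ollama is actually serving `llama2` and that the model name handed to litellm carries the `ollama/` provider prefix, which is what litellm uses to pick the backend:

```
# Sketch, assuming Ollama runs locally on its default port 11434.
# Check that the server is up and llama2 has been pulled:
ollama list
curl http://127.0.0.1:11434/api/tags

# litellm routes by provider prefix, so the model name must include it:
export LLM_MODEL=ollama/llama2   # "llama2" alone gets treated as an OpenAI-style model
```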

> I got it running using
>
> ```
> export LLM_MODEL=ollama/llama2
> export LLM_API_KEY=
> export LLM_BASE_URL=http://localhost:11434
> PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script...
> ```
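For reference, in that working setup `LLM_BASE_URL` points litellm at the local Ollama server (Ollama listens on port 11434 by default), and `LLM_API_KEY` can be left empty because a local Ollama instance does not require an API key.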

> We've mostly been testing on OpenAI--llama (especially local 7B llama) is inevitably going to do worse :/
>
> Might be worth creating a special agent for the less-capable...