Robert Brennan

Results: 1267 comments by Robert Brennan

It looks like litellm thinks the model name is just `llama2`. Did you set `LLM_MODEL=llama2`? Or `LLM_MODEL=ollama/llama2`? It should be the latter
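
For context, litellm routes requests based on the provider prefix in the model name, which is why the `ollama/` part matters. A minimal sketch of the difference (the prompt and `api_base` below are placeholder assumptions, not OpenDevin defaults):

```python
# Minimal sketch: the "ollama/" prefix tells litellm to route the request to a
# local Ollama server instead of treating "llama2" as an OpenAI-style model name.
# The prompt and api_base are placeholders, not OpenDevin configuration.
import litellm

response = litellm.completion(
    model="ollama/llama2",                 # provider prefix + model name
    messages=[{"role": "user", "content": "Say hello"}],
    api_base="http://localhost:11434",     # Ollama's usual local endpoint
)
print(response.choices[0].message.content)
```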

For

```
python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
```

try running

```
PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
```

Hmm...I think we're using llamaindex for the embeddings, not litellm
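
In other words, the embedding model is configured through llamaindex, separately from the litellm completion settings, so `LLM_MODEL` doesn't affect it. A rough sketch of what that looks like (the embedding class and model name here are assumptions for illustration, not necessarily what OpenDevin uses):

```python
# Rough sketch: embeddings go through llama-index rather than litellm.
# The embedding model below is just an example choice.
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
vector = embed_model.get_text_embedding("hello world")
print(len(vector))  # dimensionality of the embedding vector
```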

Closing in favor of https://github.com/OpenDevin/OpenDevin/issues/417

We've mostly been testing on OpenAI--llama (especially a local 7B llama) is inevitably going to do worse :/ It might be worth creating a special agent for less-capable LLMs

I'd suggest trying one of the 70B models, or using a paid LLM like GPT or Claude. You're also welcome to take a crack at building a new agent!

Changing the purpose of this issue to detecting and killing infinite loops. If the controller notices the exact same action 3+ times, we should kill the loop
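
Something along these lines (a sketch of the idea only, not the actual controller code; the class, names, and threshold handling are assumptions):

```python
# Sketch: abort when the exact same action shows up 3+ times in a row.
from collections import deque

class LoopGuard:
    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        self.recent = deque(maxlen=max_repeats)

    def record(self, action: str) -> bool:
        """Record an action; return True if the controller should kill the loop."""
        self.recent.append(action)
        return len(self.recent) == self.max_repeats and len(set(self.recent)) == 1

# Hypothetical usage inside a controller step loop:
guard = LoopGuard()
for action in ["ls", "ls", "ls"]:
    if guard.record(action):
        print("Loop detected: same action repeated 3+ times; stopping the agent")
        break
```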

This is not actually fixed--I just confirmed on main. Will take a look

This should be solved as of https://github.com/OpenDevin/OpenDevin/pull/863. If not, let us know!