stratte89
> What do you mean exactly? Before, I could see the steps in the OpenDevin server terminal. Because some models tend to stop replying in the...
> They're seeing the effect of this: [#378 (comment)](https://github.com/OpenDevin/OpenDevin/pull/378#discussion_r1552135801)
>
> > The logs now don't print to console, instead they go into a file in ./logs .. I think.

oh...
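If the logs do end up under ./logs, something like this should let you follow them live (the exact filenames are an assumption; ls the directory first to see what OpenDevin actually writes):
```
tail -f logs/*.log
```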
> > As above, please try to update. In addition, though, I'm not sure your setting for model works. If it doesn't work after update, I'd suggest trying LLM_MODEL="openai/Mistral-7B-Instruct-v0.2-GGUF"...
Use the oobabooga webui instead; here is a guide: https://github.com/OpenDevin/OpenDevin/commit/08a2dfb01af1aec6743f5e4c23507d63980726c0#commitcomment-140559598 Ollama was repeating every sentence like 100 times without doing anything (I tried most models), but oobabooga does create files, at...
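For anyone taking that route, a minimal sketch of starting text-generation-webui with its OpenAI-compatible API enabled (the --listen and --api flags exist in recent versions, but the exact setup depends on your install; the linked guide is the authoritative reference):
```
# from the text-generation-webui directory:
python server.py --listen --api
```
OpenDevin can then be pointed at the endpoint, e.g. LLM_BASE_URL="http://127.0.0.1:5000/v1" (port 5000 is the webui's default API port, an assumption worth verifying locally).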
Same, I'm trying to get it to work with lm-studio or ollama as well, but neither is working. I'm on Ubuntu. For ollama:
```
LLM_BASE_URL="127.0.0.1:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"
```
I run ollama serve...
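One thing that stands out next to the config that worked below: the base URL above has no scheme. A hedged guess at the fix, assuming ollama is on its default port (OpenDevin routes requests through litellm, which expects a full URL):
```
LLM_BASE_URL="http://127.0.0.1:11434"
```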
> @stratte89 is that error message coming from the OpenDevin backend? Or is it coming from the LLM?
>
> We shouldn't be using much more memory, but 644MiB is...
I don't know if it's related, but this config file worked with the older version:
```
LLM_API_KEY="ollama"
LLM_BASE_URL="http://0.0.0.0:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start...
```
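As a quick sanity check that ollama is actually serving at that address, /api/tags is part of ollama's standard HTTP API and lists the locally available models:
```
curl http://0.0.0.0:11434/api/tags
```
A JSON response here means the base URL is reachable; a connection refused error means ollama serve isn't running on that interface.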
Choose different LLMs. I tried like 30 already on my i5 11400, 32 GB RAM, RTX 3080, and many models gave me that error as well; I guess they're just too weak...
> claude works try that one out

What model are you using? Can you provide a huggingface link? Also, what are your specs? Just to compare.
> https://github.com/OpenDevin/OpenDevin/issues/718#issuecomment-2038342654

Make sure that the model has a high context length, like 25000+, and make sure you have enough VRAM/RAM depending on what architecture you're using...
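For ollama in particular, the context window is a per-model setting; a minimal sketch using ollama's Modelfile format (the 25000 value and the model names are illustrative, size them to your VRAM/RAM):
```
# build a mistral variant with a larger context window
cat > Modelfile <<'EOF'
FROM mistral
PARAMETER num_ctx 25000
EOF
ollama create mistral-bigctx -f Modelfile
```
Then point LLM_MODEL at "ollama/mistral-bigctx" instead of "ollama/mistral".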