stratte89

Results 32 comments of stratte89

> What do you mean exactly? ![Bildschirmfoto vom 2024-04-05 02-45-49](https://github.com/OpenDevin/OpenDevin/assets/101955267/5f28c4f3-1235-4dbd-bfb7-e55763847bbf) Before, I could see the steps in the OpenDevin server terminal. Because some models tend to stop replying in the...

> They're seeing the effect of this: [#378 (comment)](https://github.com/OpenDevin/OpenDevin/pull/378#discussion_r1552135801)
>
> > The logs now don't print to console, instead they go into a file in ./logs .. I think.

oh...

> > As above, please try to update. In addition, though, I'm not sure your setting for model works. If it doesn't work after update, I'd suggest to try LLM_MODEL="openai/Mistral-7B-Instruct-v0.2-GGUF"....

Use the oobabooga webui instead; here is a guide: https://github.com/OpenDevin/OpenDevin/commit/08a2dfb01af1aec6743f5e4c23507d63980726c0#commitcomment-140559598 Ollama was repeating every sentence like 100 times without doing anything (I tried most models), but oobabooga does create files, at...
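For reference, pointing OpenDevin at oobabooga's text-generation-webui instead of Ollama could look roughly like this. This is a sketch, not taken from the linked guide: port 5000 and the `/v1` path are what text-generation-webui's OpenAI-compatible API typically uses when started with `--api`, and the model name follows the `openai/...` form suggested earlier in this thread; treat all of it as assumptions to adapt.

```
LLM_API_KEY="na"
LLM_BASE_URL="http://127.0.0.1:5000/v1"
LLM_MODEL="openai/Mistral-7B-Instruct-v0.2-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"
```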

Same here, I'm trying to get it to work with LM Studio or Ollama; neither is working. I'm on Ubuntu. For Ollama:

```
LLM_BASE_URL="127.0.0.1:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"
```

I run ollama serve...
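One guess at what's wrong with the settings above: the base URL has no scheme, and litellm generally needs a full `http://...` URL to reach Ollama. A minimal sketch of the same settings with the scheme added (the `config.toml` file name is an assumption about where OpenDevin reads these from, not something stated in the comment):

```shell
# Write the settings from the comment into a config file, adding the
# http:// scheme to the base URL (litellm generally wants a full URL).
# config.toml as the destination is an assumption, not from the comment.
cat > config.toml <<'EOF'
LLM_BASE_URL="http://127.0.0.1:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"
EOF

# Quick check that the URL landed as intended.
grep -n 'LLM_BASE_URL' config.toml
# -> 1:LLM_BASE_URL="http://127.0.0.1:11434"
```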

> @stratte89 is that error message coming from the OpenDevin backend? Or is it coming from the LLM? > > We shouldn't be using much more memory, but 644MiB is...

I don't know if it's related, but this config file worked with the older version:

```
LLM_API_KEY="ollama"
LLM_BASE_URL="http://0.0.0.0:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"
uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start...
```

Choose different LLMs; I tried like 30 already on my i5-11400, 32 GB RAM, RTX 3080, and many models gave me that error as well. I guess they're just too weak...

> claude works, try that one out

What model are you using? Can you provide a Hugging Face link? Also, what are your specs? Just to compare.

> ![grafik](https://private-user-images.githubusercontent.com/95778421/319923881-7b7b8ec8-ec96-4216-b915-81a605c835c6.png) ![Screenshot 2024-04-05 104352](https://private-user-images.githubusercontent.com/95778421/319924291-f0466431-fd3d-4e3f-af1e-8bbab27125ee.png)

https://github.com/OpenDevin/OpenDevin/issues/718#issuecomment-2038342654 Make sure the model has a high context length, like 25000+, and make sure you have enough VRAM/RAM depending on what architecture you're using...
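On the context-length point: with Ollama, the window can be raised per model through a Modelfile. A minimal sketch, assuming the stock `mistral` model as a base; `num_ctx` is Ollama's context-length parameter, and the model and file names here are just examples, not from the comment above:

```shell
# Create a Modelfile that derives a model with a larger context window.
# "mistral" as the base and "25000" (the figure mentioned above) are
# example choices; num_ctx is Ollama's context-length parameter.
cat > Modelfile <<'EOF'
FROM mistral
PARAMETER num_ctx 25000
EOF

# Register it under a new name (needs a running Ollama daemon):
# ollama create mistral-25k -f Modelfile
```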