dpro
#347 and #307 are related, and this is likely a duplicate, though it remains an important issue for local models.
It is mentioned in the log file.
My error was slightly different, but it was fixed after upgrading to the latest commit.
1. Make sure you are using a virtual environment while installing everything. I would not use the one suggested; use conda instead. This solves lots of issues.
2. pull /...
I recommend using a litellm proxy in front of the ollama server, as the direct implementation is buggy. Here is an example config:

```
LLM_API_KEY="ollama"
LLM_BASE_URL="http://localhost:4000"
LLM_MODEL="ollama/dolphin"
LLM_EMBEDDING_MODEL="llama"
WORKSPACE_DIR="./workspace"
MAX_ITERATIONS=100
```

with litellm server:...
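To show what talking to that proxy looks like, here is a minimal stdlib-only Python sketch. It builds an OpenAI-style chat-completions request against the `LLM_BASE_URL` from the config above; the endpoint path and payload shape are assumptions based on the OpenAI-compatible convention that litellm exposes, and the actual send is commented out so nothing needs to be running:

```python
import json
import urllib.request

# Values mirror the example config above; adjust to your setup.
BASE_URL = "http://localhost:4000"
MODEL = "ollama/dolphin"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request aimed at the litellm proxy."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The key is a placeholder; a local litellm proxy typically
            # does not validate it unless you configure one.
            "Authorization": "Bearer ollama",
        },
        method="POST",
    )

req = build_chat_request("Hello")
# To actually send it (requires the proxy to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Any OpenAI-compatible client pointed at the same base URL should work the same way.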
Is it working for you? If so, which iOS version and device?
What model iPhone are you using? A 7?
Mine still hangs even though I have the variable you suggested set to local, so I'm still unsure of the reason for this behavior.
I don't understand why it's sending 100 identical requests while enumerating steps, though... (I edited my last comment because I'm blind and didn't see you link your post.)
Might be an issue with ollama, as 100 is the default number of iterations for agent runs. I'll try to set up devin with text-generation-webui and the OpenAI API and see if...