TheLocalLab
> Have you tried to set a different LLM_MODEL this way? https://github.com/OpenDevin/OpenDevin/blob/main/README.md#picking-a-model

Yes, but since I'm on Windows using PowerShell, I used `$env:LLM_API_KEY = ...` and `$env:LLM_MODEL = ...` instead.
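For anyone else on Windows, a minimal sketch of the PowerShell equivalent of the README's environment-variable setup; both values below are placeholders, not the ones from the original comment:

```powershell
# PowerShell equivalents of the shell `export` lines.
# Both values are hypothetical placeholders; substitute your own.
$env:LLM_API_KEY = "your-api-key"
$env:LLM_MODEL = "ollama/llama3"
```

Note that variables set this way only persist for the current PowerShell session.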
> os.environ["OLLAMA_HOST"] = "http://localhost:11434"
>
> import asyncio
> from browser_use import Agent
> from browser_use.agent.views import AgentHistoryList
> from langchain_ollama import ChatOllama
>
>
> async def run_search() ->...
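Filling in the truncated quote, here is a minimal self-contained sketch of what that setup appears to be doing; the task string, model name, and the `final_result()` call at the end are my assumptions, not the original poster's code:

```python
import asyncio
import os

# Assumption: Ollama is serving locally on its default port.
os.environ["OLLAMA_HOST"] = "http://localhost:11434"

from browser_use import Agent
from browser_use.agent.views import AgentHistoryList
from langchain_ollama import ChatOllama


async def run_search() -> AgentHistoryList:
    # The task string and model name are placeholders; use any
    # model you have already pulled with `ollama pull`.
    agent = Agent(
        task="Search the web for the current weather in Berlin",
        llm=ChatOllama(model="llama3.1"),
    )
    return await agent.run()


if __name__ == "__main__":
    history = asyncio.run(run_search())
    print(history.final_result())
```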
Yeah, I got the same error when trying to use the Groq API as well.
Yeah, this is happening for me as well. I tried three different local models running with Ollama (llama3.2, llama3.1, and gemma2) and the same thing keeps happening. When on the...