Ollama server is using the wrong API
I'm on Linux (openSUSE) and I installed everything correctly. I used WebUI with my OpenAI key and it worked like a charm! But then I set up a server using ollama serve and it won't work.
Log from console:
/web-ui> python webui.py --ip 127.0.0.1 --port 7788
INFO [browser_use] BrowserUse logging setup complete with level info
INFO [root] Anonymized telemetry enabled. See https://docs.browser-use.com/development/telemetry for more information.
- Running on local URL: http://127.0.0.1:7788
To create a public link, set share=True in launch().
INFO [agent] 🚀 Starting task: go to google.com and type 'OpenAI' click search and give me the first url
INFO [src.agent.custom_agent]
📍 Step 1
ERROR [agent] ❌ Result failed 1/3 times:
model "deepseek-r1:14b" not found, try pulling it first (status code: 404)
INFO [src.agent.custom_agent]
📍 Step 1
ERROR [agent] ❌ Result failed 2/3 times:
model "deepseek-r1:14b" not found, try pulling it first (status code: 404)
INFO [src.agent.custom_agent]
📍 Step 1
ERROR [agent] ❌ Result failed 3/3 times:
model "deepseek-r1:14b" not found, try pulling it first (status code: 404)
ERROR [agent] ❌ Stopping due to 3 consecutive failures
The Ollama server is also online, I tried it, but it doesn't have the /api/chat endpoint that WebUI wants. Any fixes?
Try running:
sudo docker exec -it ollama ollama run deepseek-r1:14b
This will pull the deepseek-r1:14b model weights from the internet onto your computer and open a chat with the deepseek-r1:14b model.
Then simply type /bye to exit the chat.
And try running your code once again.
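If Ollama is running natively (as in the original post with ollama serve) rather than in a Docker container named ollama, the equivalent is roughly the following, assuming the default port 11434:
ollama pull deepseek-r1:14b
ollama list
curl http://localhost:11434/api/tags
Both ollama list and the /api/tags response should include deepseek-r1:14b once the pull has finished; otherwise the 404 "model not found" error will keep coming back.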
Looking at the .env, the API endpoint for Ollama is wrong: it is http://localhost:11434 but should be http://localhost:11434/v1.
Maybe that can fix it.
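For reference, the line in the .env file would then look something like this (assuming the endpoint variable is named OLLAMA_ENDPOINT, as in web-ui's example .env):
OLLAMA_ENDPOINT=http://localhost:11434/v1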
This fixed it for me!
I'm trying to use it with Ollama in Docker.
Already tried:
- http://ollama:11434/v1 (uses docker dns resolutions)
- http://localhost:11434/v1
- http://host.docker.internal:11434/v1
Also tried without "/v1" on the Ollama host.
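A quick way to check which of these the container can actually reach, before digging into the log below, is to call the Ollama API from inside the WebUI container (a sketch: web-ui is a placeholder container name, and curl has to be available in the image):
docker exec -it web-ui curl http://host.docker.internal:11434/api/tags
docker exec -it web-ui curl http://ollama:11434/api/tags
Whichever URL returns a JSON list of models is at least reachable from inside the container. Note that host.docker.internal usually only resolves on Docker Desktop; on plain Linux Docker it normally needs an extra_hosts: "host.docker.internal:host-gateway" entry in the compose file, and http://ollama:11434 only works if both containers share a Docker network.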
INFO [src.webui.components.browser_use_agent_tab] Submit button clicked for new task.
INFO [src.webui.components.browser_use_agent_tab] Initializing LLM: Provider=ollama, Model=qwen2.5:7b, Temp=0.6
INFO [src.webui.components.browser_use_agent_tab] Initializing LLM: Provider=ollama, Model=qwen2.5:7b, Temp=0.6
INFO [src.webui.components.browser_use_agent_tab] Initializing new agent for task: find ollama on google
INFO [agent] 🧠 Starting an agent with main_model=qwen2.5:7b +vision +memory, planner_model=qwen2.5:7b, extraction_model=None
ERROR [src.webui.components.browser_use_agent_tab] Error setting up agent task: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
Traceback (most recent call last):
File "/app/src/webui/components/browser_use_agent_tab.py", line 525, in run_agent_task
webui_manager.bu_agent = BrowserUseAgent(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/browser_use/utils.py", line 305, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/browser_use/agent/service.py", line 270, in __init__
self.memory = Memory(
^^^^^^^
File "/usr/local/lib/python3.11/site-packages/browser_use/agent/memory/service.py", line 82, in __init__
self.mem0 = Mem0Memory.from_config(config_dict=self.config.full_config_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/mem0/memory/main.py", line 87, in from_config
return cls(config)
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/mem0/memory/main.py", line 46, in __init__
self.embedding_model = EmbedderFactory.create(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/mem0/utils/factory.py", line 66, in create
return embedder_instance(base_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/mem0/embeddings/ollama.py", line 32, in __init__
self._ensure_model_exists()
File "/usr/local/lib/python3.11/site-packages/mem0/embeddings/ollama.py", line 38, in _ensure_model_exists
local_models = self.client.list()["models"]
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ollama/_client.py", line 567, in list
return self._request(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ollama/_client.py", line 178, in _request
return cls(**self._request_raw(*args, **kwargs).json())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ollama/_client.py", line 124, in _request_raw
raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
edit
I've found out that Ollama is receiving AND handling the request, but after it finishes, the UI replies with an error message:
I am having the same issue here,
(browser-use) PS C:\Users\Administrator> cd C:\tmp\web-ui
(browser-use) PS C:\tmp\web-ui> python webui.py --ip 127.0.0.1 --port 7788
- Running on local URL: http://127.0.0.1:7788
To create a public link, set share=True in launch().
Constantly getting this error when submitting tasks:
Setup Error: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
(browser-use) PS C:\Users\Administrator> cd C:\tmp\web-ui
(browser-use) PS C:\tmp\web-ui> python webui.py --ip 127.0.0.1 --port 7788
- Running on local URL: http://127.0.0.1:7788
To create a public link, set share=True in launch().
INFO [src.webui.components.browser_use_agent_tab] Submit button clicked for new task.
INFO [src.webui.components.browser_use_agent_tab] Initializing LLM: Provider=ollama, Model=qwen2.5:7b, Temp=0.6
INFO [src.webui.components.browser_use_agent_tab] Launching new browser instance.
INFO [src.webui.components.browser_use_agent_tab] Creating new browser context.
INFO [src.webui.components.browser_use_agent_tab] Initializing new agent for task: 1 open https://10.10.100.4:8006/ in browser
2 end
INFO [agent] 🧠 Starting an agent with main_model=qwen2.5:7b +vision +memory, planner_model=None, extraction_model=None
ERROR [src.webui.components.browser_use_agent_tab] Error setting up agent task: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
Traceback (most recent call last):
File "C:\tmp\web-ui\src\webui\components\browser_use_agent_tab.py", line 529, in run_agent_task
webui_manager.bu_agent = BrowserUseAgent(
^^^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\browser_use\utils.py", line 305, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\browser_use\agent\service.py", line 269, in init
self.memory = Memory(
^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\browser_use\agent\memory\service.py", line 82, in init
self.mem0 = Mem0Memory.from_config(config_dict=self.config.full_config_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\mem0\memory\main.py", line 87, in from_config
return cls(config)
^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\mem0\memory\main.py", line 46, in init
self.embedding_model = EmbedderFactory.create(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\mem0\utils\factory.py", line 66, in create
return embedder_instance(base_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\mem0\embeddings\ollama.py", line 32, in init
self._ensure_model_exists()
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\mem0\embeddings\ollama.py", line 38, in _ensure_model_exists
local_models = self.client.list()["models"]
^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\ollama_client.py", line 569, in list
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\ollama_client.py", line 180, in _request
return cls(**self._request_raw(*args, **kwargs).json())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator.conda\envs\browser-use\Lib\site-packages\ollama_client.py", line 126, in _request_raw
raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
Any suggestions?
P.S. I tried this code to test the connection, and I'm getting the same error:
(browser-use) PS C:\tmp\web-ui> python -c "import ollama; print(ollama.Client().list())"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Administrator\.conda\envs\browser-use\Lib\site-packages\ollama\_client.py", line 569, in list
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\Administrator\.conda\envs\browser-use\Lib\site-packages\ollama\_client.py", line 180, in _request
return cls(**self._request_raw(*args, **kwargs).json())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\.conda\envs\browser-use\Lib\site-packages\ollama\_client.py", line 126, in _request_raw
raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download
(browser-use) PS C:\tmp\web-ui>
When I set $env:OLLAMA_HOST="http://localhost:11434" it works fine.
(browser-use) PS C:\tmp\web-ui> $env:OLLAMA_HOST="http://localhost:11434"
(browser-use) PS C:\tmp\web-ui> python -c "import ollama; print(ollama.Client().list())"
models=[Model(model='qwen2.5vl:32b', modified_at=datetime.datetime(2025, 5, 20, 12, 30, 20, 515347, tzinfo=TzInfo(+08:00)), digest='3edc3a52fe988de3e8ba4f99ac1f21a1bbc35e1af32a74983fe4e1667d6b6188', size=21159310657, details=ModelDetails(parent_model='', format='gguf', family='qwen25vl', families=['qwen25vl'], parameter_size='33.5B', quantization_level='Q4_K_M')), Model(model='qwen3:0.6b', modified_at=datetime.datetime(2025, 5, 14, 18, 6, 9, 245251, tzinfo=TzInfo(+08:00)), digest='3bae9c93586b27bedaa979979733c2b0edd1d0defc745e9638f2161192a0ccf0', size=522653526, details=ModelDetails(parent_model='', format='gguf', family='qwen3', families=['qwen3'], parameter_size='751.63M', quantization_level='Q4_K_M')), Model(model='qwen3:30b-a3b-30k', modified_at=datetime.datetime(2025, 5, 12, 16, 21, 4, 779400, tzinfo=TzInfo(+08:00)), digest='5b4f97b2ffd5d032d5a2bcc1c9830c190227425fcb0562563bd41d064b822693', size=18622562954, details=ModelDetails(parent_model='', format='gguf', family='qwen3moe', families=['qwen3moe'], parameter_size='30.5B', quantization_level='Q4_K_M')), Model(model='qwen2.5-coder:32b-21k', modified_at=datetime.datetime(2025, 3, 4, 17, 29, 52, 465840, tzinfo=TzInfo(+08:00)), digest='ad0fbed1884a5a666b8c4b7a75c70c6b8026d6a701397edd5bb98a2e5e7d97ca', size=19851349948, details=ModelDetails(parent_model='', format='gguf', family='qwen2', families=['qwen2'], parameter_size='32.8B', quantization_level='Q4_K_M')), Model(model='deepseek-r1:32b-21k', modified_at=datetime.datetime(2025, 3, 4, 17, 19, 41, 513705, tzinfo=TzInfo(+08:00)), digest='61081ab0242b54bd6185bbc1bd851513771652b9f53cc6d228c605ee41a547cc', size=19851337656, details=ModelDetails(parent_model='', format='gguf', family='qwen2', families=['qwen2'], parameter_size='32.8B', quantization_level='Q4_K_M'))]
(browser-use) PS C:\tmp\web-ui>
Does it mean that the Ollama client is not getting the correct value from .env?
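That would explain it: the ollama Python client falls back to the OLLAMA_HOST environment variable when Client() is created without an explicit host, and the mem0 embedder in the traceback appears to construct its own client, so the base URL configured in the WebUI may never reach it. As a workaround, setting the variable in the same shell session before launching (the same commands as above, just followed by the launch) should at least let the client connect:
$env:OLLAMA_HOST="http://localhost:11434"
python webui.py --ip 127.0.0.1 --port 7788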
I had the same problem. I tried a lot via the UI to get it up and running. From my understanding, it seems that not all settings I set in the UI are actually used.
Finally, I made two file changes:
- In the .env file I set: OLLAMA_ENDPOINT=http://gpu5.name.name.net:11434 (note: it did not work if I used the /v1 path!)
- I also changed the compose file by adding an extra_hosts section with a DNS mapping:
ports: - "7788:7788" - "6080:6080" - "5901:5901" - "9222:9222" extra_hosts: - "gpu5.name.name.net:10.10.14.78" environment:
In the UI I only changed the LLM Provider to "ollama" and the LLM Model Name to "deepseek-r1:14b"; I did not set the Base URL again. I also did not use any of the Planner LLM Provider settings.
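Based on the earlier finding that the ollama Python client reads OLLAMA_HOST, it might also help to pass that variable through the compose environment section so the memory/embedder code path talks to the same server (a sketch only, not verified with this setup):
environment:
  - OLLAMA_HOST=http://gpu5.name.name.net:11434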