
Open Interpreter cannot run completions with remote Ollama models despite successful connection and model listing

iamshreeram opened this issue 5 months ago · 0 comments

Describe the bug

When connecting Open Interpreter (on Windows) to a remote Ollama server (on macOS), both Open Interpreter and the native ollama client can successfully connect and list the available models. However, when attempting to use a model (e.g., deepseek-r1:14b), Open Interpreter fails to run a completion and returns a model-not-found error:

{"error":"model 'deepseek-r1:14b' not found"}

This happens even though the model is visible in the remote model list.

Reproduce

Steps to reproduce the behavior:

  1. Start the Ollama server on macOS (e.g., OLLAMA_HOST=0.0.0.0:11434 ollama serve) and ensure the model (e.g., deepseek-r1:14b) is pulled.
  2. On the Windows machine, set OLLAMA_HOST to the Mac's IP (e.g., http://10.91.55.192:11434).
  3. Use ollama list and/or Open Interpreter to confirm the model is listed.
  4. In Open Interpreter, attempt to use the model (e.g., generate a completion or start a session); an equivalent check against the Ollama HTTP API is sketched just below.
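
For reference, the same check can be made directly against the Ollama HTTP API, independent of Open Interpreter. This is a minimal sketch only: it assumes the requests package is installed, reuses the host IP and model name from step 2, and relies on Ollama's documented /api/tags (list) and /api/generate (completion) endpoints.

import requests

REMOTE = "http://10.91.55.192:11434"   # Mac running `ollama serve` (IP from step 2)
MODEL = "deepseek-r1:14b"

# Listing: the model should appear under /api/tags, matching `ollama list`.
tags = requests.get(f"{REMOTE}/api/tags", timeout=10).json()
print([m["name"] for m in tags.get("models", [])])

# Completion: if this also returns 404 {"error":"model '...' not found"}, the
# problem is on the Ollama side; if it succeeds, the request that Open
# Interpreter/litellm builds is the more likely culprit.
resp = requests.post(
    f"{REMOTE}/api/generate",
    json={"model": MODEL, "prompt": "ping", "stream": False},
    timeout=60,
)
print(resp.status_code, resp.text[:200])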

Expected behavior

Open Interpreter should be able to run completions with any model that is listed by the remote Ollama server, without errors.

Screenshots

No response

Open Interpreter version

0.4.3

Python version

3.12

Operating System name and version

Windows 11 (client); macOS (Ollama server)

Additional context

  • Network connectivity is confirmed (listing models works from both Open Interpreter and the native ollama client).
  • Only model usage (completion requests) fails, not listing (see the note after this list).
  • Model names and tags match exactly.
  • Firewall and listening address are already configured correctly.
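
One more observation: the 404 in the stack trace below is raised for http://localhost:11434/api/generate, so litellm appears to be posting to localhost rather than to the host set in OLLAMA_HOST. A possible cross-check is to pass the remote base URL explicitly instead of relying on the environment variable. This is an untested sketch; interpreter.llm.model, interpreter.llm.api_base, and interpreter.offline reflect my reading of the Open Interpreter docs for local/Ollama setups.

from interpreter import interpreter

# Untested sketch: configure the remote Ollama server explicitly rather than
# relying on OLLAMA_HOST (attribute names are my reading of the docs).
interpreter.llm.model = "ollama/deepseek-r1:14b"        # litellm-style provider/model name
interpreter.llm.api_base = "http://10.91.55.192:11434"  # remote Ollama server from step 2
interpreter.offline = True                              # skip hosted-model setup

interpreter.chat("ping")

The CLI equivalent should be something like interpreter --model ollama/deepseek-r1:14b --api_base http://10.91.55.192:11434, if I am reading the flags correctly.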

Stack trace

PS C:\Users\ram> $env:OLLAMA_HOST="http://10.91.55.192:11434"
PS C:\Users\ram> interpreter --local
C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\utils\system_debug_info.py:4: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  import pkg_resources

Open Interpreter supports multiple local model providers.

[?] Select a provider:
 > Ollama
   Llamafile
   LM Studio
   Jan

[?] Select a model:
   gemma3n
 > deepseek-r1:14b
   devstral
   deepseek-r1
   gemma3
   ↓ Download llama3.1
   ↓ Download phi3
   ↓ Download mistral-nemo
   ↓ Download gemma2
   ↓ Download codestral
   Browse Models ↗

Loading deepseek-r1:14b...

Traceback (most recent call last):
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 170, in _make_common_sync_call
    response = sync_httpx_client.post(
               ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\http_handler.py", line 754, in post
    raise e
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\http_handler.py", line 736, in post
    response.raise_for_status()
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_models.py", line 829, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://localhost:11434/api/generate'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\main.py", line 2998, in completion
    response = base_llm_http_handler.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 430, in completion
    completion_stream, headers = self.make_sync_call(
                                 ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 520, in make_sync_call
    response = self._make_common_sync_call(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 195, in _make_common_sync_call
    raise self._handle_error(e=e, provider_config=provider_config)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 2375, in _handle_error
    raise provider_config.get_error_class(
litellm.llms.ollama.common_utils.OllamaError: {"error":"model 'deepseek-r1:14b' not found"}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
                  ^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\core.py", line 145, in local_setup
    self = local_setup(self)
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\terminal_interface\local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 86, in run
    self.load()
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 324, in run
    yield from run_text_llm(self, params)
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\llm\run_text_llm.py", line 20, in run_text_llm
    for chunk in llm.completions(**params):
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\utils.py", line 1303, in wrapper
    raise e
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\utils.py", line 1178, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\main.py", line 3311, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2271, in exception_type
    raise e
  File "C:\Users\ram\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2240, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - {"error":"model 'deepseek-r1:14b' not found"}

iamshreeram · Jun 30 '25, 05:06