
Problem running with Ollama

williamlee1982 opened this issue 1 year ago • 3 comments

Describe the bug

I get the following error:

```
Loading llama3.1...

Traceback (most recent call last):
  File "/opt/homebrew/bin/interpreter", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 610, in main
    start_terminal_interface(interpreter)
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 473, in start_terminal_interface
    interpreter = profile(
                  ^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/core.py", line 145, in local_setup
    self = local_setup(self)
           ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 86, in run
    self.load()
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 324, in run
    yield from run_text_llm(self, params)
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/run_text_llm.py", line 20, in run_text_llm
    for chunk in llm.completions(**params):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/llms/ollama.py", line 371, in ollama_completion_stream
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/llms/ollama.py", line 329, in ollama_completion_stream
    raise OllamaError(
litellm.llms.ollama.OllamaError: b''
```
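For context, the final `OllamaError: b''` means litellm received an empty response body from the Ollama server. Before digging into Open Interpreter itself, it helps to confirm that Ollama answers on its own. A minimal diagnostic sketch, assuming Ollama's default `localhost:11434` address and that `llama3.1` has been pulled:

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # Ollama's default local address

# 1) Is the server reachable, and is llama3.1 actually pulled?
with urllib.request.urlopen(f"{OLLAMA}/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]
print("Installed models:", models)

# 2) Does a minimal generation request return a non-empty body?
req = urllib.request.Request(
    f"{OLLAMA}/api/generate",
    data=json.dumps(
        {"model": "llama3.1", "prompt": "ping", "stream": False}
    ).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()
print("Empty body (the failure mode above):", body == b"")
```

If step 2 also comes back empty, the problem sits between Ollama and the OS rather than in Open Interpreter; running `ollama run llama3.1` in a terminal is an equivalent check.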

Reproduce

  1. interpreter --local
  2. select Ollama
  3. select llama3.1
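The interactive picker can also be bypassed to make the failure easier to reproduce in a script. A sketch of the equivalent programmatic setup, assuming the `interpreter.llm.model` / `interpreter.llm.api_base` attributes documented for 0.3.x (adjust if your version differs):

```python
from interpreter import interpreter

# Roughly what the --local picker configures for Ollama + llama3.1
# (attribute names per the Open Interpreter docs; assumed, not verified
# against this exact release).
interpreter.offline = True
interpreter.llm.model = "ollama/llama3.1"
interpreter.llm.api_base = "http://localhost:11434"

interpreter.chat("ping")  # the same probe that crashes in the traceback above
```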

Expected behavior

The model loads and the chat works.

Screenshots

No response

Open Interpreter version

0.3.12

Python version

Python 3.12.5

Operating System name and version

macOS 14.6.1

Additional context

No response

williamlee1982 · Sep 05 '24 08:09

Same problem here. I can run interpreter on an Ubuntu server and connect to a remote Ollama, but I get the Ollama error when running interpreter on my Mac.
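A possible workaround while this is open: point the Mac install at the remote Ollama that is known to work. A sketch (`ubuntu-server` is a placeholder hostname; the attributes are assumed from the docs):

```python
from interpreter import interpreter

# "ubuntu-server" is a placeholder for the remote host running Ollama;
# that host must start Ollama with OLLAMA_HOST=0.0.0.0 so it accepts
# non-local connections.
interpreter.llm.model = "ollama/llama3.1"
interpreter.llm.api_base = "http://ubuntu-server:11434"

interpreter.chat("ping")
```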

everiori · Sep 07 '24 02:09

Same here on macOS 15.0.1

duchu · Oct 24 '24 16:10

Make sure you are using Python 3.10 or 3.11, the latest Open Interpreter, and the latest Ollama.
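A quick sanity check of the environment before launching (a minimal sketch; the 3.10–3.11 pin is taken from the advice above):

```python
import sys
import importlib.metadata

# Open Interpreter currently targets Python 3.10/3.11
assert (3, 10) <= sys.version_info[:2] <= (3, 11), f"Unsupported: {sys.version}"

# The PyPI package name is open-interpreter
print("open-interpreter", importlib.metadata.version("open-interpreter"))
```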

MikeBirdTech · Oct 24 '24 16:10

I have the latest Ollama and Python 3.12, but I have noticed that every model fails except phi3:3.8b.

duchu · Oct 28 '24 11:10

@duchu 3.12 is not a supported version.

I'm able to use llama3.1 without issue. Please change your Python version and make sure you're on the latest Open Interpreter:

```
i --local

Open Interpreter supports multiple local model providers.

[?] Select a provider:
 > Ollama
   Llamafile
   LM Studio
   Jan

[?] Select a model:
   llama3.2
   llama3.1:8b-instruct-q8_0
 > llama3.1
   dolphin-llama3
   codestral
   nemotron
   llava:34b
   llava
   mistral-nemo
   gemma2:27b
   gemma2
   ↓ Download phi3
   Browse Models ↗

Loading llama3.1...

Model loaded.

▌ Model set to llama3.1

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

> create a directory called test

  mkdir test

  Would you like to run this code? (y/n)

  y

  mkdir test

  A new empty directory has been created on your machine; it is now ready for use.

  (No further action is required.)
```

MikeBirdTech · Oct 28 '24 12:10