
OLLAMA LLAMA 3.2 fails to run with JSON decoding error

Open meetr1912 opened this issue 1 year ago • 15 comments

Describe the bug

interpreter --local

Open Interpreter supports multiple local model providers.

[?] Select a provider:

Ollama Llamafile LM Studio Jan

[?] Select a model: llama3.2

↓ Download llama3.1 ↓ Download phi3 ↓ Download mistral-nemo ↓ Download gemma2 ↓ Download codestral Browse Models ↗

Downloading llama3.1...

pulling manifest pulling 8eeb52dfb3bb... 100% ▕████████████████▏ 4.7 GB
pulling 948af2743fc7... 100% ▕████████████████▏ 1.5 KB
pulling 0ba8f0e314b4... 100% ▕████████████████▏ 12 KB
pulling 56bb8bd477a5... 100% ▕████████████████▏ 96 B
pulling 1a4c3c319823... 100% ▕████████████████▏ 485 B
verifying sha256 digest
writing manifest
success

Loading llama3.1...

Traceback (most recent call last):
  File "/opt/anaconda3/bin/interpreter", line 8, in <module>
    sys.exit(main())
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/core.py", line 145, in local_setup
    self = local_setup(self)
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 86, in run
    self.load()
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/opt/anaconda3/lib/python3.12/site-packages/litellm/llms/ollama.py", line 428, in ollama_completion_stream
    raise e
  File "/opt/anaconda3/lib/python3.12/site-packages/litellm/llms/ollama.py", line 406, in ollama_completion_stream
    function_call = json.loads(response_content)
  File "/opt/anaconda3/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
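
The failing frame is litellm's Ollama streaming handler calling json.loads() on the streamed response content as if it were a function call. Below is a minimal sketch of that failure mode, assuming the model answered with plain text rather than a complete JSON object (the chunk is hypothetical, not taken from the actual stream):

import json

# Hypothetical stand-in for what the model streamed back: it opens a string
# but is not a complete JSON document, so decoding fails with the same
# "Unterminated string" error shown in the traceback above.
partial_chunk = '"I can help with'

try:
    json.loads(partial_chunk)
except json.JSONDecodeError as e:
    print(e)  # Unterminated string starting at: line 1 column 2 (char 1)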

Reproduce

Run the interpreter --local command shown above and select an Ollama model.

Expected behavior

See above — the model should load and the interactive session should start instead of crashing.

Screenshots

No response

Open Interpreter version

0.4.3

Python version

3.12.4

Operating System name and version

macOS 13

Additional context

No response

meetr1912 avatar Oct 31 '24 15:10 meetr1912

Can you please try again with Python 3.11 or 3.10?

https://docs.openinterpreter.com/getting-started/setup

MikeBirdTech avatar Oct 31 '24 15:10 MikeBirdTech

Same issue when using llama3.2:1b

$ interpreter --local

Open Interpreter supports multiple local model providers.

[?] Select a provider:

Ollama Llamafile LM Studio Jan

[?] Select a model:

llama3.2:1b llama3.2:3b llava-llama3 llama3.1:8b phi3:3.8b nomic-embed-text qwen2:7b ↓ Download llama3.1 ↓ Download phi3 ↓ Download mistral-nemo ↓ Download gemma2 ↓ Download codestral Browse Models ↗

Loading llama3.2:1b...

Traceback (most recent call last):
  File "/Users/niehu/miniforge3/envs/open_interpreter/bin/interpreter", line 8, in <module>
    sys.exit(main())
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/core.py", line 145, in local_setup
    self = local_setup(self)
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 86, in run
    self.load()
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/litellm/llms/ollama.py", line 428, in ollama_completion_stream
    raise e
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/site-packages/litellm/llms/ollama.py", line 406, in ollama_completion_stream
    function_call = json.loads(response_content)
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/niehu/miniforge3/envs/open_interpreter/lib/python3.10/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)

niehu2018 avatar Nov 01 '24 05:11 niehu2018

I got the same error for both 3.10 and 3.11

vrijmetse avatar Nov 02 '24 12:11 vrijmetse

Yes, same here with both versions, 3.10 and 3.11, on Windows 11. I don't know what else to do after reinstalling Python and OI. 😊

Open Interpreter supports multiple local model providers.

[?] Select a provider:

Ollama Llamafile LM Studio Jan

[?] Select a model: llama3.2:1b llama3-groq-tool-use llama3.1:8b llama3.1 llama3.2

qwen2.5-coder deepseek-coder-v2 mistral nemotron-mini qwen2.5:7b starcoder2:3b gemma2 codegemma

Loading qwen2.5-coder...

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\OpenInterpreter\oi_venv\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\core.py", line 145, in local_setup
    self = local_setup(self)
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\terminal_interface\local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 86, in run
    self.load()
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\litellm\llms\ollama.py", line 428, in ollama_completion_stream
    raise e
  File "D:\OpenInterpreter\oi_venv\Lib\site-packages\litellm\llms\ollama.py", line 406, in ollama_completion_stream
    function_call = json.loads(response_content)
  File "C:\Users\Grunkah\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\Grunkah\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\Grunkah\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
(oi_venv) PS D:\OpenInterpreter>

Grunkah avatar Nov 03 '24 20:11 Grunkah

Please run pip install 'open-interpreter[local]'

If the issue persists, please share the output of interpreter --version and ollama --version

MikeBirdTech avatar Nov 04 '24 14:11 MikeBirdTech

Please run pip install open-interpreter[local]

If the issue persists, please share the output of interpreter --version and ollama --version

Sadly, it won't work. :(

I previously had Python 3.12 installed on the local machine, but I changed it to 3.11 and deleted all 3.12 dependencies.

-> Sandbox:

I installed it in a sandbox. Sadly, it doesn't work there either, but I get a different error message.

-> Local:

(oi_venv) PS D:\OpenInterpreter> interpreter --version
Open Interpreter 0.4.3 Developer Preview
(oi_venv) PS D:\OpenInterpreter> ollama --version
ollama version is 0.3.14
(oi_venv) PS D:\OpenInterpreter> python --version
Python 3.11.0

• During installation:

"DEPRECATION: wget is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559 Running setup.py install for wget ... done DEPRECATION: pyperclip is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559"

Edit: After reinstalling with python -m pip install --upgrade pip, the deprecation warning above no longer appears. I still get the same error when starting --local.

Grunkah avatar Nov 04 '24 20:11 Grunkah

@Grunkah

Sadly, it won't work. :(

What won't work?

MikeBirdTech avatar Nov 04 '24 21:11 MikeBirdTech

@Grunkah

Sadly, it won't work. :(

What won't work?

I've tried the approach suggested above and have also reinstalled Open Interpreter a few times recently. I've also reinstalled Python in an attempt to resolve the issue.

After multiple attempts to install Open Interpreter, starting interpreter --local just results in the same error as above.

I'm starting to suspect that there might be a configuration problem with my Windows 11 installation. Unfortunately, I'm not sure what that would entail or how to fix it. I don't want to reinstall Windows if fixing the issue another way is an option.

To be honest, I'm getting frustrated with the issues caused by Windows 11 again - it's not the first time I've encountered problems like this due to its quirks. Last time, it was related to PyTorch 😂. Guess how I fixed it.

Grunkah avatar Nov 04 '24 21:11 Grunkah

Loading llama3.2:3b...

Traceback (most recent call last):
  File "/home/tyson/open-interpreter/.env/bin/interpreter", line 8, in <module>
    sys.exit(main())
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/profiles/profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/core.py", line 145, in local_setup
    self = local_setup(self)
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/terminal_interface/local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 86, in run
    self.load()
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/computer/ai/ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/litellm/llms/ollama.py", line 428, in ollama_completion_stream
    raise e
  File "/home/tyson/open-interpreter/.env/lib/python3.12/site-packages/litellm/llms/ollama.py", line 406, in ollama_completion_stream
    function_call = json.loads(response_content)
  File "/usr/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.12/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)

(.env) tyson@tyson-b760mds3h:~/open-interpreter$ interpreter --version
Open Interpreter 0.4.3 Developer Preview
(.env) tyson@tyson-b760mds3h:~/open-interpreter$ ollama --version
ollama version is 0.4.0

tysonchamp avatar Nov 08 '24 06:11 tysonchamp

Same issue here; it's still not working.

tysonchamp avatar Nov 08 '24 06:11 tysonchamp

Add --no-llm_supports_functions when launching interpreter.

Notnaton avatar Nov 08 '24 12:11 Notnaton

Basically all Ollama models are failing to run. Some even load, but they all crash after inserting the prompt.

telmob avatar Nov 12 '24 12:11 telmob

Basically all Ollama models are failing to run. Some even load, but they all crash after inserting the prompt.

See my comment above https://github.com/OpenInterpreter/open-interpreter/issues/1514#issuecomment-2464606167

Will be fixed next release #1524

Notnaton avatar Nov 12 '24 12:11 Notnaton

See my comment above #1514 (comment)

I've tried that option. It's still not working.

Will be fixed next release #1524

Great. Thanks!

telmob avatar Nov 12 '24 12:11 telmob

Add --no-llm_supports_functions when launching interpreter.

Thanks, it works! Adding this parameter did the trick. Command: interpreter --local --no-llm_supports_functions, then select "ollama" and the model that you want.

bg9cxn avatar Nov 16 '24 14:11 bg9cxn

I hit the same JSON error. interpreter --local --no-llm_supports_functions works, but can it perform as well as it does with OpenAI?

1caiji23 avatar Nov 30 '24 09:11 1caiji23

One remaining problem: I have to pass --no-llm_supports_functions every time I launch interpreter --local, as if the computer forgets the config.

1caiji23 avatar Dec 02 '24 07:12 1caiji23

This will be fixed in the next update, I believe.

Notnaton avatar Dec 05 '24 11:12 Notnaton

How can I use "interpreter --local --no-llm_supports_functions" inside Python code? Currently I have:

interpreter.llm.model = "ollama/llama3.2"  # Specific configuration may vary
interpreter.llm.api_base = "http://localhost:11434"  # Typical Ollama local endpoint
# interpreter.llm.api_key = "your_api_key_if_required"

# Start interactive session
interpreter.chat()

mordsm avatar Dec 09 '24 12:12 mordsm

interpreter.llm.supports_functions = False

Notnaton avatar Dec 10 '24 11:12 Notnaton
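
For reference, a minimal sketch putting the two pieces together in Python, assuming the package's documented from interpreter import interpreter entry point; the model name and api_base are just the values from the comment above, so adjust them to your setup:

from interpreter import interpreter

interpreter.llm.model = "ollama/llama3.2"            # local Ollama model (example value)
interpreter.llm.api_base = "http://localhost:11434"  # default Ollama endpoint
interpreter.llm.supports_functions = False           # same effect as --no-llm_supports_functions

interpreter.chat()  # start the interactive session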

Thanks, it worked!


mordsm avatar Dec 10 '24 12:12 mordsm