
TypeError: sequence item 1: expected str instance, NoneType found

Open gokulcoder7 opened this issue 1 year ago • 5 comments

Describe the bug

Loading qwen2.5-coder:14b...

```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Python312\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Python312\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "C:\Python312\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
                  ^^^^^^^^
  File "C:\Python312\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "C:\Python312\Lib\site-packages\interpreter\core\core.py", line 145, in local_setup
    self = local_setup(self)
           ^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\interpreter\terminal_interface\local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "C:\Python312\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 86, in run
    self.load()
  File "C:\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "C:\Python312\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "C:\Python312\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "C:\Python312\Lib\site-packages\litellm\llms\ollama.py", line 427, in ollama_completion_stream
    raise e
  File "C:\Python312\Lib\site-packages\litellm\llms\ollama.py", line 403, in ollama_completion_stream
    response_content = "".join(content_chunks)
                       ^^^^^^^^^^^^^^^^^^^^^^^
TypeError: sequence item 1: expected str instance, NoneType found
```

(langchain) C:\Windows\System32>

I want to use the Ollama model qwen2.5-coder:14b with Open Interpreter, but it fails with the error: `TypeError: sequence item 1: expected str instance, NoneType found`.
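
For context on what is blowing up: the bottom frame joins the streamed chunks with `"".join(content_chunks)`, and `str.join` raises exactly this `TypeError` the moment one chunk's content is `None` instead of a string. A minimal sketch of the failure mode (the chunk list and names here are hypothetical; litellm's actual streaming code differs):

```python
# Hypothetical chunk list: the second streamed chunk arrived with content=None.
content_chunks = ["Hello", None, " world"]

try:
    response_content = "".join(content_chunks)
except TypeError as e:
    print(e)  # sequence item 1: expected str instance, NoneType found

# A defensive join would drop the None entries before concatenating:
response_content = "".join(c for c in content_chunks if c is not None)
```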

Reproduce

1. Run `interpreter --local`
2. Select Ollama
3. Select model: qwen2.5-coder:14b
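
A quick way to rule out the model itself (assuming a default local Ollama install) is to run it directly before step 1:

```
ollama pull qwen2.5-coder:14b
ollama run qwen2.5-coder:14b "ping"
```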

Expected behavior

I expect qwen2.5-coder:14b to load and run under Open Interpreter.

Screenshots

(screenshots: image_2024-11-27_16-01-28, image_2024-11-27_15-56-49)

Open Interpreter version

0.4.3

Python version

3.12.5

Operating System name and version

Windows 11

Additional context

gokulcoder7 · Nov 27 '24

hi

gokulcoder7 · Nov 27 '24

```
(myenv) F:\for open interpreter>interpreter --version
Open Interpreter 0.4.3 Developer Preview
```

gokulcoder7 · Dec 06 '24

Loading qwen2.5-coder:14b...

```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "F:\for open interpreter\myenv\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 471, in start_terminal_interface
    interpreter = profile(
                  ^^^^^^^^
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 64, in profile
    return apply_profile(interpreter, profile, profile_path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 148, in apply_profile
    exec(profile["start_script"], scope, scope)
  File "<string>", line 1, in <module>
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\core.py", line 145, in local_setup
    self = local_setup(self)
           ^^^^^^^^^^^^^^^^^
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\terminal_interface\local_setup.py", line 314, in local_setup
    interpreter.computer.ai.chat("ping")
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\llm\llm.py", line 86, in run
    self.load()
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
    self.interpreter.computer.ai.chat("ping")
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
    yield from run_tool_calling_llm(self, params)
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "F:\for open interpreter\myenv\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "F:\for open interpreter\myenv\Lib\site-packages\litellm\llms\ollama.py", line 455, in ollama_completion_stream
    raise e
  File "F:\for open interpreter\myenv\Lib\site-packages\litellm\llms\ollama.py", line 433, in ollama_completion_stream
    function_call = json.loads(response_content)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
               ^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
```
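
Same setup, different symptom this time: the tool-calling path gets far enough to parse the joined stream as a JSON function call, and `json.loads` fails on the incomplete payload. A minimal sketch of that failure mode (the payload below is hypothetical; litellm's actual parsing differs):

```python
import json

# Hypothetical truncated tool-call payload, standing in for response_content
# in the ollama_completion_stream frame above.
response_content = '{"name": "execute", "arguments": "print(\'ping'

try:
    function_call = json.loads(response_content)
except json.JSONDecodeError as e:
    print(e)  # Unterminated string starting at: line 1 column ...
```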

gokulcoder7 · Dec 06 '24

I am facing the same issue. Here are the reproducible steps:

```
uv init test-interpreter
cd test-interpreter
uv add open-interpreter
uv run interpreter --local
```

Then choose Ollama; both qwq and llama3.2 give the same error as above.

- uv = 0.5.6 (installed with Homebrew)
- Python = 3.12.7 (I also tried 3.10 and 3.11; both give the same error)
- open-interpreter = 0.4.3 developer preview

Never mind, found the solution in another issue: run `uv run interpreter --local --no-llm_supports_functions` and it works.

https://github.com/OpenInterpreter/open-interpreter/issues/1514#issuecomment-2480602935
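
For anyone driving Open Interpreter from Python rather than the CLI, the equivalent of that flag should be the `supports_functions` setting on the llm object. A minimal sketch, assuming the 0.4.x Python API mirrors the CLI flag name:

```python
# Sketch of the same workaround via the Python API. Assumption: the CLI flag
# --no-llm_supports_functions corresponds to llm.supports_functions = False,
# which keeps Open Interpreter off the tool-calling code path that crashes above.
from interpreter import interpreter

interpreter.offline = True                          # local mode, no hosted API keys
interpreter.llm.model = "ollama/qwen2.5-coder:14b"  # route requests to the local Ollama server
interpreter.llm.supports_functions = False          # disable function/tool calling

interpreter.chat("ping")
```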

marcoleewow · Dec 06 '24

Thanks for your reply.

gokulcoder7 · Dec 15 '24