
[llama-3.1 70B] Open Interpreter's prep steps did not complete after setting the model

Open · mickitty0511 opened this issue 1 year ago • 12 comments

Describe the bug

While following your official doc on using Ollama models, I tried running llama 3.1 with Open Interpreter. However, errors occurred during the prep steps that run after the model is set. I would appreciate a detailed resolution or an explanation of what happened in my case; I hope a developer can reproduce this error and advise.

Reproduce

Follow your official doc

Used these commands:

  • ollama run llama3.1
  • interpreter --model ollama/llama3.1

Then Open Interpreter asked me if I wanted to create a new profile file. I answered n.

The error is as follows.

[2024-07-30T03:56:01Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/llama3.1/resolve/main/tokenizer.json failed with fatal error Traceback (most recent call last):

json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
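
For what it's worth, the same exception message can be reproduced by passing json.loads an unterminated JSON string, so my guess (purely an assumption on my part) is that whatever content is parsed during the prep step arrives truncated:

import json

# Hypothetical truncated payload, e.g. a streamed chunk cut off in the middle of a string
partial_chunk = '{"arguments'

try:
    json.loads(partial_chunk)
except json.JSONDecodeError as e:
    print(e)  # Unterminated string starting at: line 1 column 2 (char 1)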

Expected behavior

I expected it to complete the prep steps, based on what I read in your official docs.

Screenshots

No response

Open Interpreter version

0.3.4

Python version

3.11.5

Operating System name and version

Windows 11

Additional context

No response

mickitty0511 avatar Jul 30 '24 05:07 mickitty0511

got the same

uthpala1000 avatar Aug 02 '24 07:08 uthpala1000

same too

HugoQwQ avatar Aug 11 '24 13:08 HugoQwQ

same.

Yisiooo avatar Aug 12 '24 07:08 Yisiooo

It seems llama3.1 isn't supported.

HugoQwQ avatar Aug 12 '24 08:08 HugoQwQ

Maybe you can run ollama run llama3 and use interpreter --model ollama/llama3 instead.

HugoQwQ avatar Aug 12 '24 08:08 HugoQwQ

This PR should fix this issue. https://github.com/OpenInterpreter/open-interpreter/pull/1400

CyanideByte avatar Aug 15 '24 06:08 CyanideByte

I'm still facing this issue on Windows 11, even running the latest OI, Ollama and Llama 3.1 versions.

PS C:\Users\User> interpreter --version
Open Interpreter 0.3.7 The Beginning (Ty and Victor)
PS C:\Users\User> ollama --version
ollama version is 0.3.6
PS C:\Users\User> interpreter --model ollama/llama3.1

▌ Model set to ollama/llama3.1

Loading llama3.1...

Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\Scripts\interpreter.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 586, in main
    start_terminal_interface(interpreter)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 540, in start_terminal_interface
    validate_llm_settings(
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\terminal_interface\validate_llm_settings.py", line 110, in validate_llm_settings
    interpreter.llm.load()
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 358, in load
    self.interpreter.computer.ai.chat("ping")
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\computer\ai\ai.py", line 130, in chat
    for chunk in self.computer.interpreter.llm.run(messages):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 291, in run
    yield from run_tool_calling_llm(self, params)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 177, in run_tool_calling_llm
    for chunk in llm.completions(**request_params):
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 420, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\interpreter\core\llm\llm.py", line 400, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\litellm\llms\ollama.py", line 370, in ollama_completion_stream
    raise e
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\litellm\llms\ollama.py", line 348, in ollama_completion_stream
    function_call = json.loads(response_content)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
PS C:\Users\User>

I'm not sure what I'm doing wrong.

leafarilongamor avatar Aug 19 '24 20:08 leafarilongamor

@mickitty0511 @leafarilongamor

I brought this up internally and it's being worked on! Thanks for raising the issue

MikeBirdTech avatar Aug 19 '24 20:08 MikeBirdTech

Hi @MikeBirdTech, the same issue occurs on macOS too.

goku@192 ~ % interpreter --version
Open Interpreter 0.3.7 The Beginning (Ty and Victor)
goku@192 ~ % ollama -v
ollama version is 0.3.6

I am on macOS Version 15.0 Beta (24A5309e) if that makes any difference for you. Best!

UltraInstinct0x avatar Aug 20 '24 22:08 UltraInstinct0x

same issue

interpreter --version
Open Interpreter 0.4.3 Developer Preview

ollama -v
ollama version is 0.3.13

wa008 avatar Nov 04 '24 07:11 wa008

@MikeBirdTech @leafarilongamor Did y'all get the fix? I'm trying to load llama 3.2 and I'm getting the same error.

omarnahdi avatar Nov 07 '24 21:11 omarnahdi

This will be fixed with the merge of this PR: https://github.com/OpenInterpreter/open-interpreter/pull/1524

If you want to try it early, you can install it like this: pip install --upgrade --force-reinstall git+https://github.com/CyanideByte/open-interpreter.git@local-fixes
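
After it installs, re-running the original commands (assuming the llama3.1 model is already pulled in Ollama) should confirm whether the prep step now completes:

ollama run llama3.1
interpreter --model ollama/llama3.1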

CyanideByte avatar Nov 07 '24 23:11 CyanideByte