Crashes when using -lsv with LLaVA
### Describe the bug

Crashes when using `-m ollama/LLaVA -lsv`; works fine without the `-lsv` parameter.

### Reproduce

1. Run `interpreter -m ollama/llava -lsv`
2. Ask for a visual description

### Expected behavior

Not a crash.

### Screenshots
PS C:\Windows\system32> interpreter -lsv -y -m ollama/LLaVA
▌ A new version of Open Interpreter is available.
▌ Please run: pip install --upgrade open-interpreter
──────────────────────────────────────────────────
Describe image C:\image_for_describe.jpg
Python Version: 3.12.2
Pip Version: 24.0
Open-interpreter Version: cmd:Interpreter, pkg: 0.2.0
OS Version and Architecture: Windows-11-10.0.22631-SP0
CPU Info: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
RAM Info: 31.80 GB, used: 11.54, free: 20.26
# Interpreter Info
Vision: True
Model: ollama/LLaVA
Function calling: None
Context window: None
Max tokens: None
Auto run: True
API base: None
Offline: False
Curl output: Not local
# Messages
System Message: You are Open Interpreter, a world-class programmer that can complete any goal by executing code.
First, write a plan. Always recap the plan between each code block (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it). When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task. Execute the code. If you want to send data between programming languages, save the data to a txt or json. You can access the internet. Run any code to achieve the goal, and if at first you don't succeed, try again and again. You can install new packages. When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in. Write messages to the user in Markdown. In general, try to make plans with as few steps as possible. As for actually executing code to carry out that plan, for stateful languages (like python, javascript, shell, but NOT for html which starts from 0 every time) it's critical not to try to do everything in one code block. You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see. You are capable of any task.
{'role': 'user', 'type': 'message', 'content': 'Describe image C:\\image_for_describe.jpg'}
{'role': 'user', 'type': 'image', 'format': 'path', 'content': 'C:\image_for_describe.jpg'}
Traceback (most recent call last):
File "C:\Program Files\Python312\Lib\site-packages\interpreter\core\llm\llm.py", line 221, in fixed_litellm_completions
yield from litellm.completion(**params)
File "C:\Program Files\Python312\Lib\site-packages\litellm\llms\ollama.py", line 260, in ollama_completion_stream
raise e
File "C:\Program Files\Python312\Lib\site-packages\litellm\llms\ollama.py", line 248, in ollama_completion_stream
status_code=response.status_code, message=response.text
^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\httpx\_models.py", line 576, in text
content = self.content
^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\httpx\_models.py", line 570, in content
raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called read().
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "read().
### Open Interpreter version

0.2.0

### Python version

3.12.2

### Operating System name and version

Windows 11 Home 64-bit

### Additional context

No response
I don't think Python 3.12 is supported yet. Please downgrade to 3.11.
~~What is the `-lsv` flag? I can't find it in the docs.~~

Never mind, try `-lsv true`
I got this error: `interpreter: error: unrecognized arguments: true`
Same error on 3.11
Same error for me, Python 3.11.5, Windows.
Try a smaller model. I have also encountered this issue; it seems related to the model's response speed, and this error shows up when generation is slow.

(Answer translated by software.)
What model (for Ollama or LM Studio) do you recommend?
It seems you are hitting an `httpx.ResponseNotRead` exception when trying to access the contents of a streaming response without having called the `read()` method. This error occurs when you try to get the content of an HTTP response before the response has been completely read. Here is a step-by-step plan to solve this problem:

1. Verify the response: make sure the HTTP response you are trying to read is a streaming response and is ready to be read.
2. Call the `read()` method: before accessing the content of the response, call `read()` on the response object to ensure the content has been fully loaded.
3. Handle exceptions: implement proper exception handling to catch and handle any errors that may occur while reading the response.

Here is an example of how you could modify your code to handle this error:
```python
import httpx

try:
    with httpx.Client() as client:
        response = client.get('your_url_here')
        response.read()  # Make sure to call read() before accessing the content
        content = response.text
        # Process the content here
except httpx.ResponseNotRead as e:
    print(f"Error reading response: {e}")
```
AI-generated code. Review and use carefully. This is a general example; you will need to adapt it to your specific situation.