open-interpreter
🌋 LLaVA: Large Language and Vision Assistant support
Is your feature request related to a problem? Please describe.
OpenAI is cool but it's also very expensive.
Describe the solution you'd like
LLaVA could be a great candidate as an alternative to GPT-4V. https://huggingface.co/mys/ggml_llava-v1.5-7b
I was able to load it through LM Studio, but unfortunately it crashes and requires more work.
Describe alternatives you've considered
No response
Additional context
No response
Hey there, @ilteris!
Do you have any more details about what you’re asking for from the project/community here?
Are you saying the model crashes when you try to run it in general or that it specifically crashes when you try to use it with Open Interpreter?
Thank you for the response, @ericrallen. I am trying to run this local vision model and get Open Interpreter to interpret its output and turn it into code. I'm starting very basic: just understanding what the image is first.
Here's my prompt and output:
Output:

```
Traceback (most recent call last):
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/interpreter/core/respond.py", line 49, in respond
    for chunk in interpreter._llm(messages_for_llm):
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/interpreter/llm/convert_to_coding_llm.py", line 65, in coding_llm
    for chunk in text_llm(messages):
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/interpreter/llm/setup_text_llm.py", line 130, in base_llm
    return openai.ChatCompletion.create(**params)
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
  File "/Users/ilteris/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 428, in handle_error_response
    error_code=error_data.get("code"),
AttributeError: 'str' object has no attribute 'get'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ilteris/.venv/bin/interpreter", line 8, in <module>
...

Please make sure LM Studio's local server is running by following the steps above.
If LM Studio's local server is running, please try a language model with a different architecture.
```
I am running LM Studio with the https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-q5_k.gguf model.
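For context on the error itself: the final `AttributeError` is the legacy `openai` 0.x client failing to parse the server's error response. Its `handle_error_response` expects the payload's `error` field to be an object it can call `.get("code")` on, but it received a plain string. A tiny illustration of the mismatch, with both payload shapes made up for demonstration:

```python
# Illustrates the AttributeError in the traceback above: the legacy openai 0.x
# client effectively does error_data = body["error"] and then error_data.get("code").
openai_style = {"error": {"message": "model crashed", "code": "model_error"}}  # shape the client expects
string_style = {"error": "model crashed while processing the request"}          # shape that breaks it

for body in (openai_style, string_style):
    error_data = body["error"]
    try:
        print("error code:", error_data.get("code"))
    except AttributeError as exc:
        print("parse failed:", exc)  # 'str' object has no attribute 'get'
```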
Just to double-check: does this prompt work when sent to the model directly, without Open Interpreter?
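If it helps, you can test that by hitting LM Studio's OpenAI-compatible endpoint directly. A minimal sketch, assuming LM Studio's default server address of http://localhost:1234; the model name is a placeholder, since LM Studio serves whichever model is currently loaded:

```python
# Direct sanity check against LM Studio's local server, bypassing Open Interpreter.
# Assumes the default address http://localhost:1234; adjust if you changed it.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": [{"role": "user", "content": "What is in this image?"}],
        "stream": False,
    },
    timeout=120,
)
print(resp.status_code)
# If the request fails, inspect the raw body: a plain-string "error" field here
# is exactly what trips the openai client's error parsing in your traceback.
print(resp.text)
```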
Closing this stale issue. Please create a new issue if the problem is not resolved or explained in the documentation. Thanks!