
Low tolerance to network connection issues

shuther opened this issue 1 year ago · 0 comments

Describe the bug

Trying to run it against an API gateway that is not running produces many errors that are not self-explanatory. Ideally, we should get only one trace.

Reproduce

Run the command below, where xxx is not a proper ollama server (not reachable):

interpreter -dt --model ollama/mistral --api_base http://xxx:11434/v1
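
To double-check the precondition before reproducing, a quick probe can confirm the host really is down. This is a hypothetical helper for convenience only; the `requests` dependency and the `/models` path are assumptions, not part of the repro:

```python
# Hypothetical pre-check (not part of the bug report): verify the
# api_base is actually unreachable before running the interpreter.
import requests

api_base = "http://xxx:11434/v1"  # same unreachable host as the repro
try:
    requests.get(f"{api_base}/models", timeout=5)
    print("Server answered -- pick a host that is actually down.")
except requests.exceptions.RequestException as exc:
    print(f"Unreachable, as the repro requires: {exc!r}")
```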

Expected behavior

The error should be caught, and only one message along the lines of "LLM not reachable" printed.
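
A minimal sketch of that behavior, assuming litellm's public exception types (`completion_or_die` is a hypothetical wrapper for illustration, not Open Interpreter's actual code):

```python
# Illustrative only: collapse connection/404 failures into one message
# instead of surfacing four chained tracebacks.
import litellm

def completion_or_die(params):
    try:
        return litellm.completion(**params)
    except (litellm.exceptions.APIConnectionError,
            litellm.exceptions.NotFoundError) as exc:
        raise SystemExit(
            f"LLM not reachable at {params.get('api_base')}: {exc}"
        ) from None  # 'from None' suppresses the chained traceback
```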

Screenshots

No response

Open Interpreter version

0.2.2

Python version

Python 3.12.2 (python3 -V)

Operating System name and version

macOS

Additional context

Actual Trace

 - create a program to test if a german sentence is valid (no grammatical issues)

We were unable to determine the context window of this model. Defaulting to 3000.                                                                                                       

If your model can handle more, run interpreter --context_window {token limit} --max_tokens {max tokens per response}.                                                                   

Continuing...                                                                                                                                                                           


        Python Version: 3.11.8
        Pip Version: 24.0
        Open-interpreter Version: cmd: Open Interpreter 0.2.2 New Computer Update, pkg: 0.2.2
        OS Version and Architecture: macOS-14.3.1-arm64-arm-64bit
        CPU Info: arm
        RAM Info: 8.00 GB, used: 3.08, free: 0.10
        

        # Interpreter Info
        
        Vision: False
        Model: ollama/mistral
        Function calling: None
        Context window: None
        Max tokens: None

        Auto run: False
        API base: http://linuxmain:11434/v1
        Offline: False

        Curl output: Not local

        # Messages

        System Message: You are Open Interpreter, a world-class programmer that can complete any goal by executing code.
First, write a plan. **Always recap the plan between each code block** (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).
When you execute code, it will be executed **on the user's machine**. The user has given you **full and complete permission** to execute any code necessary to complete the task. Execute the code.
If you want to send data between programming languages, save the data to a txt or json.
You can access the internet. Run **any code** to achieve the goal, and if at first you don't succeed, try again and again.
You can install new packages.
When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.
Write messages to the user in Markdown.
In general, try to **make plans** with as few steps as possible. As for actually executing code to carry out that plan, for *stateful* languages (like python, javascript, shell, but NOT for html which starts from 0 every time) **it's critical not to try to do everything in one code block.** You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.
You are capable of **any** task.

# THE COMPUTER API

A python `computer` module is ALREADY IMPORTED, and can be used for many tasks:

```python
computer.browser.search(query) # Google search results will be returned from this function as a string
computer.files.edit(path_to_file, original_text, replacement_text) # Edit a file
computer.calendar.create_event(title="Meeting", start_date=datetime.datetime.now(), end=datetime.datetime.now() + datetime.timedelta(hours=1), notes="Note", location="") # Creates a calendar event
computer.calendar.get_events(start_date=datetime.date.today(), end_date=None) # Get events between dates. If end_date is None, only gets events for start_date
computer.calendar.delete_event(event_title="Meeting", start_date=datetime.datetime) # Delete a specific event with a matching title and start date, you may need to get use get_events() to find the specific event object first
computer.contacts.get_phone_number("John Doe")
computer.contacts.get_email_address("John Doe")
computer.mail.send("[email protected]", "Meeting Reminder", "Reminder that our meeting is at 3pm today.", ["path/to/attachment.pdf", "path/to/attachment2.pdf"]) # Send an email with a optional attachments
computer.mail.get(4, unread=True) # Returns the {number} of unread emails, or all emails if False is passed
computer.mail.unread_count() # Returns the number of unread emails
computer.sms.send("555-123-4567", "Hello from the computer!") # Send a text message. MUST be a phone number, so use computer.contacts.get_phone_number frequently here
```

Do not import the computer module, or any of its sub-modules. They are already imported.

User Info{{import getpass
import os
import platform}}
Name: {{getpass.getuser()}}
CWD: {{os.getcwd()}}
SHELL: {{os.environ.get('SHELL')}}
OS: {{platform.system()}}

    {'role': 'user', 'type': 'message', 'content': 'create a program to test if a german sentence is valid (no gramatical issues)'}

```
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/llms/openai.py", line 374, in completion
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/llms/openai.py", line 292, in completion
    return self.streaming(
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/llms/openai.py", line 474, in streaming
    response = openai_client.chat.completions.create(**data, timeout=timeout)
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_utils/_utils.py", line 271, in wrapper
    return func(*args, **kwargs)
  File "/opt/homebrew/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 643, in create
    return self._post(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 1112, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 859, in request
    return self._request(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/_base_client.py", line 949, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': "model 'ollama/mistral' not found, try pulling it first", 'type': 'api_error', 'param': None, 'code': None}}
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/main.py", line 887, in completion
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/main.py", line 861, in completion
    response = openai_chat_completions.completion(
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/llms/openai.py", line 380, in completion
    raise OpenAIError(status_code=e.status_code, message=str(e))
litellm.llms.openai.OpenAIError: Error code: 404 - {'error': {'message': "model 'ollama/mistral' not found, try pulling it first", 'type': 'api_error', 'param': None, 'code': None}}
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 235, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 2727, in wrapper
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 2628, in wrapper
    result = original_function(*args, **kwargs)
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/main.py", line 1941, in completion
    raise exception_type(
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 8100, in exception_type
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 6928, in exception_type
    raise NotFoundError(
litellm.exceptions.NotFoundError: OpenAIException - Error code: 404 - {'error': {'message': "model 'ollama/mistral' not found, try pulling it first", 'type': 'api_error', 'param': None, 'code': None}}
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "/opt/homebrew/bin/interpreter", line 8, in <module>
    sys.exit(main())
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 415, in main
    start_terminal_interface(interpreter)
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 393, in start_terminal_interface
    interpreter.chat()
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/core.py", line 154, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/core.py", line 183, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/terminal_interface/terminal_interface.py", line 136, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/core.py", line 222, in _streaming_chat
    yield from self._respond_and_store()
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/core.py", line 268, in _respond_and_store
    for chunk in respond(self):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/respond.py", line 68, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 207, in run
    yield from run_text_llm(self, params)
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/run_text_llm.py", line 19, in run_text_llm
    for chunk in llm.completions(**params):
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 238, in fixed_litellm_completions
    raise first_error
  File "/opt/homebrew/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 219, in fixed_litellm_completions
    yield from litellm.completion(**params)
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 2727, in wrapper
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 2628, in wrapper
    result = original_function(*args, **kwargs)
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/main.py", line 1941, in completion
    raise exception_type(
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 8100, in exception_type
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 6928, in exception_type
    raise NotFoundError(
litellm.exceptions.NotFoundError: OpenAIException - Error code: 404 - {'error': {'message': "model 'ollama/mistral' not found, try pulling it first", 'type': 'api_error', 'param': None, 'code': None}}
```

➜ ~ [IPKernelApp] WARNING | Parent appears to have exited, shutting down.
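
For what it's worth, the stacked "During handling of the above exception, another exception occurred" blocks are the signature of a retry-then-re-raise pattern: judging from the frames above, fixed_litellm_completions catches the first failure, retries, and then raises first_error, so Python chains every intermediate traceback. A simplified, hypothetical sketch of that pattern (not the actual Open Interpreter source):

```python
# Simplified, hypothetical sketch of the retry pattern implied by the
# trace above -- not the real fixed_litellm_completions implementation.
import litellm

def fixed_litellm_completions(**params):
    try:
        yield from litellm.completion(**params)  # first attempt
    except Exception as first_error:
        try:
            # hypothetical second attempt, e.g. with adjusted params
            yield from litellm.completion(**params)
        except Exception:
            raise first_error  # chains both tracebacks together
```

Re-raising with `from None`, or catching the failure once near the top level, would cut the chain down to the single message the Expected behavior section asks for.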

shuther · Mar 12 '24 14:03