
AzureException when plotting using Azure OpenAI

lexnederbragt opened this issue 11 months ago · 0 comments

Describe the bug

Model: gpt-4o through Azure OpenAI. When asked to plot something, the plot appears and Open Interpreter states "Viewing image...". After a while, it crashes with this error message:

litellm.exceptions.BadRequestError: litellm.BadRequestError: AzureException BadRequestError - Error code: 400 - {'error': {'message': "Invalid value: 'computer'. Supported values are: 'system', 'assistant', 'user', 'function', and 'tool'.", 'type': 'invalid_request_error', 'param': 'messages[3].role', 'code': 'invalid_value'}}
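For what it's worth, the 400 seems to come straight from the Azure endpoint rather than from Open Interpreter itself: Azure's chat completions API only accepts the roles listed in the error, while Open Interpreter uses an internal "computer" role for code output. A minimal sketch that should reproduce the same error outside Open Interpreter, using the same placeholder deployment, endpoint, and key as in the repro below:

import litellm

# Placeholders, as in the repro below: substitute your own deployment name,
# endpoint, and key. Any message carrying the non-standard role "computer"
# should trigger the same 400 invalid_value response from Azure.
litellm.completion(
    model="azure/My_gpt-4o",
    api_base="https://some.openai.azure.com/",
    api_version="2024-05-01-preview",
    api_key="my_api_key",
    messages=[
        {"role": "user", "content": "Plot something."},
        {"role": "computer", "content": "Viewing image..."},  # rejected by Azure
    ],
)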

Reproduce

conda create --name open-interpreter python=3.11
conda activate open-interpreter
pip install 'open-interpreter[local]'
interpreter --no-llm_supports_functions --model azure/My_gpt-4o -ab https://some.openai.azure.com/ -av 2024-05-01-preview -ak my_api_key

Ask for the following:

Write a program that casts two dice 1000 times and plots the distribution of the sum of the cast.
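For completeness, the same setup should be reproducible from the Python API; a rough sketch only (the attribute names follow the Open Interpreter docs, and the values are the same placeholders as the CLI command above):

from interpreter import interpreter

# Same placeholder values as the CLI invocation above.
interpreter.llm.model = "azure/My_gpt-4o"
interpreter.llm.api_base = "https://some.openai.azure.com/"
interpreter.llm.api_version = "2024-05-01-preview"
interpreter.llm.api_key = "my_api_key"
interpreter.llm.supports_functions = False  # mirrors --no-llm_supports_functions

interpreter.chat(
    "Write a program that casts two dice 1000 times "
    "and plots the distribution of the sum of the cast."
)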

Output:

▌ Model set to azure/My_gpt-4o

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

> Write a program that casts two dice 1000 times and plots the distribution of the sum of the cast.
11:42:04 - LiteLLM:ERROR: utils.py:1830 - Model not found or error in checking vision support. You passed model=My_gpt-4o, custom_llm_provider=azure. Error: This model isn't mapped yet. model=My_gpt-4o, custom_llm_provider=azure. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.

We were unable to determine the context window of this model. Defaulting to 8000.

If your model can handle more, run interpreter --context_window {token limit} --max_tokens {max tokens per response}.

Continuing...


  To accomplish this, we'll follow these steps:

   1 Write a Python script to simulate casting two dice 1000 times.
   2 Calculate the sum of the values of each cast.
   3 Store the results in a list.
   4 Plot the distribution of the sums using a histogram.

  Here is the code for the program:



  import numpy as np
  import matplotlib.pyplot as plt

  # Number of simulations
  num_simulations = 1000

  # Simulate rolling two dice
  rolls = np.random.randint(1, 7, (num_simulations, 2))

  # Sum the results of each cast
  sums = np.sum(rolls, axis=1)

  # Plot the distribution of sums
  plt.hist(sums, bins=range(2, 14), edgecolor='black', align='left')
  plt.title('Distribution of the Sums of Two Dice Rolls (1000 casts)')
  plt.xlabel('Sum of Dice Rolls')
  plt.ylabel('Frequency')
  plt.xticks(range(2, 13))
  plt.show()


  Would you like to run this code? (y/n)

  y


  import numpy as np
  import matplotlib.pyplot as plt

  # Number of simulations
  num_simulations = 1000

  # Simulate rolling two dice
  rolls = np.random.randint(1, 7, (num_simulations, 2))

  # Sum the results of each cast
  sums = np.sum(rolls, axis=1)

  # Plot the distribution of sums
  plt.hist(sums, bins=range(2, 14), edgecolor='black', align='left')
  plt.title('Distribution of the Sums of Two Dice Rolls (1000 casts)')
  plt.xlabel('Sum of Dice Rolls')
  plt.ylabel('Frequency')
  plt.xticks(range(2, 13))
  plt.show()



Viewing image...

Traceback (most recent call last):
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/llms/azure/azure.py", line 427, in
completion
    return self.streaming(
           ^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/llms/azure/azure.py", line 702, in
streaming
    headers, response = self.make_sync_azure_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/llms/azure/azure.py", line 300, in
make_sync_azure_openai_chat_completion_request
    raise e
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/llms/azure/azure.py", line 292, in
make_sync_azure_openai_chat_completion_request
    raw_response = azure_client.chat.completions.with_raw_response.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/openai/_legacy_response.py", line 356, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 829, in
create
    return self._post(
           ^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/openai/_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/openai/_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/openai/_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value: 'computer'. Supported values are: 'system',
'assistant', 'user', 'function', and 'tool'.", 'type': 'invalid_request_error', 'param': 'messages[3].role', 'code':
'invalid_value'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/main.py", line 1190, in completion
    response = azure_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/llms/azure/azure.py", line 527, in
completion
    raise AzureOpenAIError(
litellm.llms.azure.common_utils.AzureOpenAIError: Error code: 400 - {'error': {'message': "Invalid value: 'computer'.
Supported values are: 'system', 'assistant', 'user', 'function', and 'tool'.", 'type': 'invalid_request_error', 'param':
'messages[3].role', 'code': 'invalid_value'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/miniconda3/envs/open-interpreter/bin/interpreter", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File
"/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py
", line 612, in main
    start_terminal_interface(interpreter)
  File
"/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/terminal_interface/start_terminal_interface.py
", line 578, in start_terminal_interface
    interpreter.chat()
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 191, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 223, in
_streaming_chat
    yield from terminal_interface(self, message)
  File
"/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/terminal_interface/terminal_interface.py",
line 162, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 259, in
_streaming_chat
    yield from self._respond_and_store()
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/core.py", line 318, in
_respond_and_store
    for chunk in respond(self):
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/respond.py", line 87, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 324, in run
    yield from run_text_llm(self, params)
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/llm/run_text_llm.py", line 20, in
run_text_llm
    for chunk in llm.completions(**params):
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 466, in
fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/interpreter/core/llm/llm.py", line 443, in
fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/utils.py", line 983, in wrapper
    raise e
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/utils.py", line 864, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/main.py", line 2951, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File
"/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py",
line 2141, in exception_type
    raise e
  File
"/opt/miniconda3/envs/open-interpreter/lib/python3.11/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py",
line 1888, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: AzureException BadRequestError - Error code: 400 - {'error':
{'message': "Invalid value: 'computer'. Supported values are: 'system', 'assistant', 'user', 'function', and 'tool'.", 'type':
'invalid_request_error', 'param': 'messages[3].role', 'code': 'invalid_value'}}
(open-interpreter) [Users/alexajo]$
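The traceback suggests that messages[3] still carries Open Interpreter's internal "computer" role when fixed_litellm_completions hands the conversation to litellm.completion, instead of being remapped to a role Azure accepts. As a stopgap until this is fixed upstream, something along these lines could be applied to the message list before it is sent; a sketch only, using a hypothetical helper that is not part of Open Interpreter's API:

# Hypothetical helper: remap Open Interpreter's internal "computer" role to
# "assistant" so the Azure endpoint accepts the message. "assistant" is chosen
# over "tool" here because plain code output has no tool_call_id to pair with
# a "tool" message.
def sanitize_roles(messages):
    sanitized = []
    for msg in messages:
        if msg.get("role") == "computer":
            msg = {**msg, "role": "assistant"}
        sanitized.append(msg)
    return sanitized

This would also be consistent with the local run described below: the Ollama path apparently never sends the raw role string to a strict validator, so the conversation continues normally there.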

Expected behavior

Open Interpreter should handle the error and return to the prompt so I can continue asking questions.

When running the same request with --local and llama3.1:8b through Ollama, the following appears in the terminal shortly after the plot has been generated:

  The graph shows the distribution of the sum of two dice rolled 1000 times.
  Each bar represents the frequency of a particular sum.

  This is what you wanted to see: a visual representation of the probability
  distribution of rolling two dice.

  We are done here, as we have plotted the desired distribution.

  (No further code needs to be run)

>

I can now continue using Open Interpreter. This is the expected behavior, I think.

Screenshots

[Screenshot: oi_image]

Open Interpreter version

0.4.3

Python version

3.11.11

Operating System name and version

macOS Sequoia 15.2

Additional context

No response

lexnederbragt · Dec 16 '24