
None is not of type ‘object’

Open farahats9 opened this issue 1 year ago • 26 comments

Confirm this is an issue with the Python library and not an underlying OpenAI API

  • [X] This is an issue with the Python library

Describe the bug

When calling functions with no input arguments, it gives this error:

response = openai_client.chat.completions.create(timeout=10,
  File "/usr/local/lib/python3.9/site-packages/openai/_utils/_utils.py", line 299, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/openai/resources/chat/completions.py", line 556, in create
    return self._post(
  File "/usr/local/lib/python3.9/site-packages/openai/_base_client.py", line 1055, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/usr/local/lib/python3.9/site-packages/openai/_base_client.py", line 834, in request
    return self._request(
  File "/usr/local/lib/python3.9/site-packages/openai/_base_client.py", line 877, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.OpenAIError: Error code: 400 - {'error': {'message': "None is not of type 'object' - 'messages.2.function_call'", 'type': 'invalid_request_error', 'param': None, 'code': None}}

This used to work before the recent updates.

I also posted this bug in the community forum since it is related to the API https://community.openai.com/t/none-is-not-of-type-object/488873

To Reproduce

An example of the function I'm trying to call:

tools = [
  {
      "type": "function",
      "function": {
          "name":  "escalate_to_manager",
          "description": "Contact the manager when there is nothing else that can be done",
          "parameters": {
              "type": "object",
              "properties": {},
              "required": [],
          },
      }
  }
]

And I am calling the chat completion normally:

response = client.chat.completions.create(timeout=10,
                                         model="gpt-3.5-turbo-1106",
                                         messages=messages,
                                         tools=tools,
                                         tool_choice="auto",
                                         temperature=0,
                                         )

Code snippets

No response

OS

debian bullseye

Python version

python 3.9

Library version

openai v1.1.1

farahats9 avatar Nov 10 '23 17:11 farahats9

Can you share the messages you're using?

RobertCraigie avatar Nov 10 '23 17:11 RobertCraigie

I think you just have to run pip install --upgrade pydantic

pydantic==1.10.12 may be the problem; it worked for me.

For more info you can check this issue.

milioe avatar Nov 10 '23 23:11 milioe

@milioe I checked the library versions and I am not using 1.10.12: pydantic==2.4.2, pydantic_core==2.10.1

Also, I noticed the error happens on other functions with input parameters as well, so I'm not sure what is causing this.

farahats9 avatar Nov 11 '23 02:11 farahats9

This should be fixed in https://github.com/openai/openai-python/releases/tag/v1.2.3 which was released a few hours after this issue was opened. Can you try upgrading and let me know if the problems still persist?

rattrayalex avatar Nov 11 '23 02:11 rattrayalex

Problem is still happening with the latest version openai==1.2.3

farahats9 avatar Nov 11 '23 13:11 farahats9

Can you share a full snippet? You've only shared the tools you're passing but the error message indicates the issue is with messages.

RobertCraigie avatar Nov 11 '23 13:11 RobertCraigie

I have the same issue with openai 1.2.3. Error message is:

Error code: 400 - {'error': {'message': "None is not of type 'object' - 'messages.2.function_call'", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Message logs are like this:

[{"role":"system","content":"You are  [REDACTED]"},{"role":"user","content":"[REDACTED]"},{"content":null,"function_call":null,"role":"assistant","tool_calls":[{"index":0,"id":"[REDACTED]","function":{"arguments":"{\"query\": \"[REDACTED]\"}","name":"[REDACTED]"},"type":"function"},{"index":1,"id":"[REDACTED]","function":{"arguments":"{}","name":"[REDACTED]"},"type":"function"},{"index":2,"id":"[REDACTED]","function":{"arguments":"{}","name":"[REDACTED]"},"type":"function"}]},{"tool_call_id":"[REDACTED]","role":"tool","name":"[REDACTED]","content":"[REDACTED]"},{"tool_call_id":"[REDACTED]","role":"tool","name":"[REDACTED]","content":"[REDACTED]"},{"tool_call_id":"[REDACTED]","role":"tool","name":"[REDACTED]","content":"[REDACTED]"}]

Sorry, I had to redact confidential information.

aemr3 avatar Nov 11 '23 15:11 aemr3

I think the issue is related to "function_call" being null (None in this case). I removed it from the incoming message from OpenAI and the problem is gone.
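
Roughly, the workaround looks like this (just a sketch, assuming the assistant message gets appended back into the history as a plain dict):

# assistant message returned by the API, converted to a plain dict
msg = response.choices[0].message.model_dump()

# drop the null "function_call" field before sending the history back
msg.pop("function_call", None)

messages.append(msg)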

aemr3 avatar Nov 11 '23 15:11 aemr3

@aemr3 ok will try this and see if it works

farahats9 avatar Nov 11 '23 15:11 farahats9

The problem stopped after I followed @aemr3's advice, thank you.

farahats9 avatar Nov 12 '23 14:11 farahats9

I am still having the same issue with version 1.7.2; I had to implement @aemr3's workaround. Can we reopen this? @rattrayalex

zhangineer avatar Jan 15 '24 17:01 zhangineer

Sure. @nknj, could the backend be changed to not throw an error when e.g. messages.2.function_call is null?

rattrayalex avatar Jan 16 '24 17:01 rattrayalex

Yes, I'm facing the same issue; could the error please be resolved on the backend?

VALilly avatar Jan 17 '24 10:01 VALilly

I think the following code should work; it seems like either the pydantic model is not quite right, or the backend is being a bit over-picky here:

import sys
import openai
from openai import OpenAI

def main():
    client = OpenAI(api_key=<YOUR_API_KEY>)
    completion = client.chat.completions.create(
        model='gpt-4-1106-preview',
        messages = [
            {'role': 'user', 'content': 'Call the hello_world function.'},
        ],
        tools=[
            {
                "type": "function",
                "function": {
                    "name": "hello_world",
                    "description": "Hello, world",
                    "parameters": {
                        "type": "object", "properties": {}, "required": [],
                    },
                }
            }
        ],
    )

    as_dict = completion.model_dump()
    reconstructed = openai.types.chat.chat_completion.ChatCompletion(**as_dict)

    message = reconstructed.choices[0].message
    call_results = [{'tool_call_id': t.id, 'role': 'tool', 'name': 'hello_world', 'content': 'Well hello there!'} for t in message.tool_calls]

    if not call_results:
        return 'No tool calls found. (please try again!)'

    # The following request should fail with a 400 error.
    completion = client.chat.completions.create(
        model='gpt-4-1106-preview',
        messages = [{'role': 'user', 'content': 'Call the hello_world function.'}, message, *call_results]
    )

    print('If we get here, then the problem wasn\'t reproduced.')
          

if __name__ == '__main__':
    sys.exit(main())

stestagg avatar Jan 24 '24 09:01 stestagg

You likely need to pass exclude_unset=True when converting the model into a dictionary.
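
Adapting the snippet above, something like this (untested sketch):

# dump only the fields the API actually set, so unset fields
# like function_call=None are left out of the dict
as_dict = completion.model_dump(exclude_unset=True)
reconstructed = openai.types.chat.chat_completion.ChatCompletion(**as_dict)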

RobertCraigie avatar Jan 24 '24 09:01 RobertCraigie

Thank you, that solved it!

I know that might be a pydantic thing, but it would be nice if the defaults 'just worked' here.

stestagg avatar Jan 24 '24 10:01 stestagg

Yeah I agree, it's unfortunate that this behaviour is the default. Sadly I don't think it's really feasible for us to change that default as it'll likely just cause compatibility issues.

RobertCraigie avatar Jan 24 '24 10:01 RobertCraigie

Hi all, I scanned this thread but am a touch lost because I am not using the Python lib. I am using the 12-2023 version of the API, which implements the tools param, and I am getting the exact same error specified in this thread when writing raw protocol code against the REST endpoint. Could someone perhaps scan my JSON below and help me pinpoint what is wrong?

Edit: actually, I guess my error is slightly different.

"error": { "message": "[{'name': 'GetTime', 'description': 'Convert the following text to a time stamp in the format YYYY-MM-DD HH:MM:SS, assume a start date of 01/30/2024 12:33:22', 'parameters': {'properties': {'ti meDisplay': {'type': 'string', 'description': 'Time must be displayed in a time stamp format YYYY-MM-DD 00:00:00 at midnight.'}}, 'type': 'object', 'required': ['timeDisplay']}}] is not of type 'object' - 'too ls.0.function'", "type": "invalid_request_error", "param": null, "code": null }


{
    "tools": [
        {
            "type": "function",
            "function": [
                {
                    "name": "GetTime",
                    "description": "Convert the following text to a time stamp in the format YYYY-MM-DD HH:MM:SS, assume a start date of 01/30/2024 12:24:43",
                    "parameters": {
                        "properties": {
                            "timeDisplay": {
                                "type": "string",
                                "description": "Time must be displayed in a time stamp format YYYY-MM-DD 00:00:00 at midnight."
                            }
                        },
                        "type": "object",
                        "required": [
                            "timeDisplay"
                        ]
                    }
                }
            ]
        }
    ],
    "tool_choice": "GetTime",
    "messages": [
        {
            "content": "You are a bot that returns converted times from sentences, always use 01/30/2024 12:24:43 when answering. Your response must be returned in a time stamp format YYYY-MM-DD 00:00:00 at midnight.",
            "role": "system"
        },
        {
            "content": "Convert the following text to a time stamp in the format YYYY-MM-DD HH:MM:SS, assume a start date of 01/30/2024 12:24:43: book a meeting for 60 min in a few weeks.",
            "role": "user"
        }
    ],
    "max_tokens": 200,
    "temperature": 0.1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "top_p": 0.95,
    "stop": null
}

ganlbarone avatar Jan 30 '24 17:01 ganlbarone

@ganlbarone

{'timeDisplay': {'type': 'string', 'description': 'Time must be displayed in a time stamp format YYYY-MM-DD 00:00:00 at midnight.'}}, 'type': 'object', 'required': ['timeDisplay']}}] is not of type 'object' - 'tools.0.function'",

Just a guess without knowing the tool api: there you say timeDisplay is 'type': 'string' and then the error says 'timeDisplay' is not type 'object'. But I don't know really.

Maybe using the python client would be easier to get those things right.

antont avatar Jan 30 '24 19:01 antont

I actually ended up figuring out the problem. The function node in the JSON can't be an array; that was the part that was breaking it. tools is an array that can contain multiple functions, but function itself is singular.

It isn't working exactly how I want/expected it to from a feature standpoint, but that is a me problem to sort out now that the JSON is structured appropriately and the API isn't erroring.

Anyway, just posting this back in case anyone runs into this type of error; the returned errors could probably be a bit more descriptive than they are.

{
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "GetTime",
                "description": "Convert the user prompt to a time stamp in the format YYYY-MM-DD HH:MM:SS, assume a start date of 01/30/2024 14:31:03",
                "parameters": {
                    "properties": {
                        "timeString": {
                            "type": "string",
                            "description": "Convert the user prompt to a time stamp in the format YYYY-MM-DD HH:MM:SS, assume a start date of 01/30/2024 14:31:03"
                        }
                    },
                    "type": "object",
                    "required": [
                        "timeString"
                    ]
                }
            }
        }
    ],
    "tool_choice": {
        "function": {
            "name": "GetTime"
        },
        "type": "function"
    },
    "messages": [
        {
            "content": "Convert the user prompt to a time stamp in the format YYYY-MM-DD HH:MM:SS, assume a start date of 01/30/2024 14:31:03.",
            "role": "system"
        },
        {
            "content": "book a meeting for 60 min in a few weeks",
            "role": "user"
        }
    ],
    "max_tokens": 200,
    "temperature": 0.1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "top_p": 0.95,
    "stop": null
}

ganlbarone avatar Jan 30 '24 19:01 ganlbarone

function_call

This is still the current state. When removing function_call from the message that encoded the tool calls before sending them back to the API, it works. If the key is included and None, the error is completely reproducible. The strange thing is that OpenAI's example code shows passing back the whole message object without changes, which doesn't make any sense to me :/

JakobPCoder avatar Apr 23 '24 15:04 JakobPCoder

What do you mean when you say "when removing function_call from the message that encoded the tool calls"? When I dump to a dict, it doesn't recognize the tool calls after removing the function_call key.

I'm saving the messages (including the tool calls) to a CosmosDB. It works fine without alterations on a new thread. But when I retrieve the objects and deserialize them, it no longer works even though I'm using the ChatCompletionMessage model.

rm2631 avatar Apr 29 '24 22:04 rm2631

OpenAI wants us to place the tool call message and the tool response messages into the messages list. Since function_call has been deprecated in favor of tool_calls, when the model decides to call tools it produces a message that includes both a list of tool_calls and a "function_call" parameter that is always None/null. I manage all my messages myself as dicts in a list, and when I call GPT with such a message (a dict, not an OpenAI message object) that still includes function_call: None, it throws an API error that does NOT explain it is due to this key being present. So this error is neither expected nor handled.
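
If you manage the history yourself as plain dicts, a minimal sketch of the cleanup (reusing client, tools, and messages from earlier in the thread; the names are illustrative) would be:

# strip the null function_call key from self-managed message dicts
# before sending the history back to the API
cleaned_messages = [
    {k: v for k, v in m.items() if not (k == "function_call" and v is None)}
    for m in messages
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=cleaned_messages,
    tools=tools,
)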

JakobPCoder avatar Apr 30 '24 12:04 JakobPCoder

I get it now. Since I was saving to CosmosDB, I wanted to recreate the message list and therefore was converting the tool call to a ChatCompletionMessage, just like it's returned from OpenAI.


from openai import OpenAI
from openai.types.chat import ChatCompletionMessage

client = OpenAI()

# `messages` is an existing conversation where messages[1] is the
# assistant message containing the tool calls (a ChatCompletionMessage)

def run_completion(messages):
    try:
        client.chat.completions.create(
            model="YOUR_MODEL",
            messages=messages,
        )  # get a new response from the model where it can see the function response
        print("SUCCESS")
    except Exception as e:
        print(f"FAILED: {e}")

attempt_1 = messages.copy()
attempt_2 = messages.copy()
attempt_3 = messages.copy()

tool_call = messages[1].model_dump()

attempt_1[1] = ChatCompletionMessage(**tool_call)
run_completion(attempt_1)  # FAILURE

attempt_2[1] = {
    key: value for key, value in tool_call.items() if key != "function_call"
}
run_completion(attempt_2)  # SUCCESS

attempt_3[1] = ChatCompletionMessage(
    **{key: value for key, value in tool_call.items() if key != "function_call"}
)
run_completion(attempt_3)  # SUCCESS

attempt_1 == attempt_3  # True

The intriguing part is the comparison of attempt_1 and attempt_3: even though they compare equal, the results are not the same.

rm2631 avatar May 02 '24 15:05 rm2631

Sorry, I don't think there's much we can do in the SDK for this. Using message.to_json() may be helpful, as it will strip out unnecessary fields.
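
For example, something like this (a sketch; to_json() returns a JSON string, so parse it back into a dict before appending it to the history):

import json

# serialize the assistant message, dropping the unset fields,
# then append the resulting plain dict to the history
messages.append(json.loads(message.to_json()))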

Otherwise, I agree it'd be nice for the backend to ignore function_call: null in this scenario, but that ticket should not be tracked in this repo (which is just for the python SDK).

rattrayalex avatar May 13 '24 01:05 rattrayalex