openai-python
DEFER_PYDANTIC_BUILD causes issues
Confirm this is an issue with the Python library and not an underlying OpenAI API
- [X] This is an issue with the Python library
Describe the bug
Release 1.16.1 broke clients by deferring the build of pydantic models. As a result, calling model_dump() on an object returned by the library can fail.
https://github.com/openai/openai-python/pull/1292/commits/bc6866eb2335d01532190d0906cad7bf9af28621
While this speeds up build times, it should be considered a breaking change and, ideally, avoided.
If the speedup is significant, we would request not using a generic env var like DEFER_PYDANTIC_BUILD to revert the change, and instead using something like OPENAI_PYDANTIC_DEFER_BUILD.
We noticed this issue with a streaming tool call message, and are not sure if other deep classes exhibit this behaviour.
To Reproduce
- Create a streaming tool call chat completion message.
- Call model_dump() on the object.
- Observe stack trace in pydantic serializer code.
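The failure mode can be sketched without the openai library at all. The following is a purely illustrative simulation (the class names mirror the ones in the stack trace below, but none of pydantic's real internals are used): a deferred build leaves a placeholder in the serializer slot, so dumping before an explicit rebuild raises the same TypeError.

```python
# Illustrative stand-ins only -- these are NOT pydantic's real classes.

class MockValSer:
    """Placeholder left in the serializer slot while the build is deferred."""
    def to_python(self, obj):
        raise TypeError(
            "'MockValSer' object cannot be converted to 'SchemaSerializer'"
        )

class SchemaSerializer:
    """The real serializer that an explicit rebuild installs."""
    def to_python(self, obj):
        return dict(vars(obj))

class DeferredModel:
    # Build was deferred, so the slot still holds the placeholder.
    _serializer = MockValSer()

    def __init__(self, **fields):
        self.__dict__.update(fields)

    @classmethod
    def model_rebuild(cls):
        cls._serializer = SchemaSerializer()

    def model_dump(self):
        return self._serializer.to_python(self)

chunk = DeferredModel(index=0, id="call_1")
try:
    chunk.model_dump()          # fails: the serializer was never built
except TypeError as exc:
    print(exc)

DeferredModel.model_rebuild()   # explicit rebuild installs the real serializer
print(chunk.model_dump())       # {'index': 0, 'id': 'call_1'}
```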
Code snippets
No response
OS
macOS, Linux
Python version
Python v3.11.8
Library version
openai v1.16.1
Can you share a full example script to reproduce this? I can't reproduce this.
Yep, let me try and pry it loose from our code and create a standalone repro.
Thanks, even just a full stack trace would be helpful :)
That's easier. I manually elided some file paths to non-library code.
Traceback (most recent call last):
File "/opt/homebrew/Cellar/[email protected]/3.11.8/Frameworks/Python.framework/Versions/3.11/lib/python3.11/wsgiref/handlers.py", line 138, in run
self.finish_response()
File "/Users/.../Library/Caches/pypoetry/virtualenvs/backend--rBupBVn-py3.11/lib/python3.11/site-packages/django/core/servers/basehttp.py", line 173, in finish_response
super().finish_response()
File "/opt/homebrew/Cellar/[email protected]/3.11.8/Frameworks/Python.framework/Versions/3.11/lib/python3.11/wsgiref/handlers.py", line 183, in finish_response
for data in self.result:
File "/.../backend/chat/views.py", line 91, in message_save_wrapper
for message_chunk in generator:
File "/.../backend/llm/handlers/openaifunction.py", line 285, in chat_streaming
for chunk in response:
File "/.../backend/llm/handlers/openai.py", line 73, in _traced_completion_create_streaming
call_params["messages"] = [m.model_dump() for m in params["messages"]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../backend/llm/handlers/openai.py", line 73, in <listcomp>
call_params["messages"] = [m.model_dump() for m in params["messages"]]
^^^^^^^^^^^^^^
File "/Users/.../Library/Caches/pypoetry/virtualenvs/backend--rBupBVn-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 314, in model_dump
return self.__pydantic_serializer__.to_python(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'MockValSer' object cannot be converted to 'SchemaSerializer'
params["messages"] is list[ChoiceDeltaToolCall] or list[ChatCompletionMessageToolCall], which blows up when model_dump() is called on it.
We got around this by rewriting our code not to call model_dump() directly on the returned object, but that was pretty unintuitive behavior.
Let me know if you'd like me to try and isolate a repro for you.
What version of pydantic are you using? Looks like this is a pydantic bug that's been reported a couple of times: https://github.com/pydantic/pydantic/issues/7713
In the meantime you could probably fix this by setting PYDANTIC_DEFER_BUILD to 0 or by calling ChatCompletionChunk.model_rebuild().
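Concretely, the two suggested workarounds could look something like the sketch below. Note two assumptions: the env var name used in the comments above is a typo that is corrected later in this thread to DEFER_PYDANTIC_BUILD, and the env var presumably has to be set before the openai library is imported (the openai import is shown commented out since it requires the library to be installed).

```python
import os

# Workaround 1: disable deferred pydantic builds in openai 1.16.x.
# Set this BEFORE importing openai; the name follows the correction
# made later in this thread (DEFER_PYDANTIC_BUILD, not PYDANTIC_DEFER_BUILD).
os.environ["DEFER_PYDANTIC_BUILD"] = "0"

# Workaround 2: force the affected model's schema to build up front.
# from openai.types.chat import ChatCompletionChunk
# ChatCompletionChunk.model_rebuild()
```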
Ooh, quite possible. We're on pydantic 2.6.4.
We did fix this by setting PYDANTIC_DEFER_BUILD; this is what gave us confidence about where the bug was. Our claim is that this is an unanticipated breaking change (it briefly broke production) in a minor semver update. We then backed out the env var and fixed it the "right way".
Ultimately, the bug doesn't block us, but it's possible that others will run into the same issue and start to see strange stacks in production.
FWIW, the name PYDANTIC_DEFER_BUILD is unfortunate. It's very specific to the openai lib rather than to all pydantic defer_build=True situations.
Sure, it was never intended to be public, it was just planned for internal use. But renaming it makes sense.
Thanks for listening. Selfishly, we are fine with whatever y’all choose to do here including closing as “working as intended”, since we got around this already.
Hey, for future reference, there's a typo in the issue title and comments. The correct workaround is to set the DEFER_PYDANTIC_BUILD environment variable to 0, not PYDANTIC_DEFER_BUILD.
Oops. Thanks @iwaszko. I edited the title and the original description.
@RobertCraigie LMK if you want to just close this bug if this isn't something y'all want to address. We've resolved the issue on our end.