
OpenAI BadRequestError

rcourivaud opened this issue 1 year ago • 2 comments

Describe the issue as clearly as possible:

When I call the generator, the request fails with the following error:

openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for response_format 'default': In context=('properties', 'description'), 'default' is not permitted.", 'type': 'invalid_request_error', 'param': 'response_format', 'code': None}}

Steps/code to reproduce the bug:

# Imports inferred from context; Config, precision, and the nested
# models are defined elsewhere in the reporter's project.
import outlines
from outlines import models
from pydantic import BaseModel, Field
from typing import List

def get_equipment_model():
    model = models.openai(
        "gpt-4o-mini",
        api_key=Config.OPENAI_API_KEY
    )
    return model

class EquipmentDetails(BaseModel):
    type: str = Field(description="The equipment type")
    sub_type: str = Field(description="The equipment sub-type")
    energy_usage: List[EquipmentEnergyUsage] = Field(description="The equipment's energy usages")
    specific_units: List[SpecificUnitsBase] = Field(description="The equipment's specific units")




def build_details_equipment_from_string(text: str, base_equipment: EquipmentBase):
    model = get_equipment_model()
    generator = outlines.generate.json(model, EquipmentDetails)
    extraction_prompt = precision(equipment_description=text)
    detailed_equipments = generator(extraction_prompt)
    return detailed_equipments

Expected result:

web-1  |   File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
web-1  |     result = context.run(func, *args)
web-1  |   File "/app/app/routers/equipment.py", line 20, in text_to_equipment
web-1  |     return create_equipment_from_text(equipment_text.text)
web-1  |   File "/app/app/services/equipment.py", line 9, in create_equipment_from_text
web-1  |     equipment = build_equipment_from_string(text)
web-1  |   File "/app/app/dependencies/outlines/services/equipments.py", line 28, in build_equipment_from_string
web-1  |     base_equipment = build_base_equipment_from_string(text)
web-1  |   File "/app/app/dependencies/outlines/services/equipments.py", line 14, in build_base_equipment_from_string
web-1  |     base_equipment = generator(extraction_prompt)  # Await added here
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/models/openai.py", line 145, in __call__
web-1  |     response, prompt_tokens, completion_tokens = generate_chat(
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/base.py", line 61, in __call__
web-1  |     return self.call_with_signature(*args, **kwargs)
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/base.py", line 166, in call_with_signature
web-1  |     outputs = self.vectorize_call_coroutine(broadcast_shape, args, kwargs)
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/base.py", line 255, in vectorize_call_coroutine
web-1  |     outputs = loop.run_until_complete(create_and_gather_tasks())
web-1  |   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/base.py", line 249, in create_and_gather_tasks
web-1  |     outputs = await asyncio.gather(*tasks)
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/models/openai.py", line 212, in generate_chat
web-1  |     responses = await call_api(prompt, system_prompt, config)
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/caching.py", line 107, in wrapper
web-1  |     result = await cached_function(*args, **kwargs)
web-1  |   File "/usr/local/lib/python3.10/site-packages/outlines/models/openai.py", line 201, in call_api
web-1  |     responses = await client.chat.completions.create(
web-1  |   File "/usr/local/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1661, in create
web-1  |     return await self._post(
web-1  |   File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1838, in post
web-1  |     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
web-1  |   File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1532, in request
web-1  |     return await self._request(
web-1  |   File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1633, in _request
web-1  |     raise self._make_status_error_from_response(err.response) from None
web-1  | openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for response_format 'default': In context=('properties', 'description'), 'default' is not permitted.", 'type': 'invalid_request_error', 'param': 'response_format', 'code': None}}

Error message:

No response

Outlines/Python version information:

Version information

```
outlines==0.1.1
fastapi==0.108.0
```

Context for the issue:

No response

rcourivaud — Nov 07 '24 09:11

I think the issue here is that default values are not allowed in the JSON schema sent as `response_format` to OpenAI.

So define your Pydantic models like this:

class EquipmentDetails(BaseModel):
    type: str
    sub_type: str
    energy_usage: List[EquipmentEnergyUsage]
    specific_units: List[SpecificUnitsBase]
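The root cause is easy to check directly: a field declared with a default shows up as a `"default"` key in the JSON schema Pydantic generates, and that key is what OpenAI's `response_format` validator rejects. A minimal sketch with a hypothetical `Item` model:

```python
from pydantic import BaseModel, Field

class Item(BaseModel):
    # The default value ends up in the generated JSON schema as a
    # "default" key, which OpenAI's response_format validator rejects.
    name: str = Field(default="widget", description="Item name")

schema = Item.model_json_schema()
print(schema["properties"]["name"])  # contains a "default" entry
```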

If you still encounter an error, try adding this to your Pydantic models as well (per the Outlines docs):

class EquipmentDetails(BaseModel):
    model_config = ConfigDict(extra="forbid") # required by openai 
    type: str
    sub_type: str
    energy_usage: List[EquipmentEnergyUsage]
    specific_units: List[SpecificUnitsBase]
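The `extra="forbid"` setting surfaces in the generated schema as `additionalProperties: false`, which OpenAI's strict structured-output mode requires on every object. A quick way to verify, with a hypothetical `Strict` model:

```python
from pydantic import BaseModel, ConfigDict

class Strict(BaseModel):
    # extra="forbid" makes Pydantic emit "additionalProperties": false
    # in the JSON schema, as required by OpenAI's strict mode.
    model_config = ConfigDict(extra="forbid")
    name: str

schema = Strict.model_json_schema()
print(schema["additionalProperties"])
```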

Hope that solves your issue.

aymanHS-code — Nov 12 '24 10:11

OpenAI's docs on this are really bad. They do note that defaults aren't supported, but if you don't want to change all of the Pydantic models you already created, you can use this library, which converts your models into OpenAI-safe ones (I'm partial to it): https://github.com/reedgraff/schemic
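For anyone who prefers not to add a dependency, a minimal hand-rolled sketch of the same idea (this is not the schemic API; `strip_defaults` is a hypothetical helper) recursively drops `"default"` keys from a Pydantic-generated schema before it is sent as `response_format`:

```python
def strip_defaults(schema):
    """Recursively remove 'default' keys from a JSON-schema structure
    so the result passes OpenAI's response_format validation."""
    if isinstance(schema, dict):
        return {k: strip_defaults(v) for k, v in schema.items()
                if k != "default"}
    if isinstance(schema, list):
        return [strip_defaults(v) for v in schema]
    return schema
```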


ReedGraff — Apr 29 '25 18:04