pydantic-ai
Mistral Support
This pull request introduces integration with Mistral models for Pydantic AI.
The MistralModel class uses the Mistral Python client to interact with the Mistral API, enabling both streaming and non-streaming requests as well as structured responses.
The integration supports several modes of operation, including function calling, JSON mode, and stream mode, chosen based on the presence of function and result tools.
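A minimal usage sketch of the intended API (the import path and parameter names such as `result_type` are assumptions based on the current Agent API and may shift before merge):

```python
from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.mistral import MistralModel


class CityLocation(BaseModel):
    city: str
    country: str


# The mode (function calling vs JSON mode) is chosen by the model class
# based on the result and function tools registered on the agent.
model = MistralModel('mistral-large-latest')
agent = Agent(model, result_type=CityLocation)

result = agent.run_sync('Where were the 2024 Summer Olympics held?')
print(result.data)
#> city='Paris' country='France'
```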
Note: Mistral does not support streaming for function calling or structured responses; even when using stream_async, the behavior is not truly streaming. After discussions on the Mistral Discord, I use json_mode to stream only the structured output.
Test: >> Working on it. <<
- [ ] Tests
@YanSte BaseModel isn't yet supported by partial validation; this is explained in the docs at https://docs.pydantic.dev/latest/concepts/experimental/#typeadapter-only.
Use a TypedDict and it should work.
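For example, the streamed structured result could use a TypedDict so partial validation can accept the incomplete JSON as it arrives. A rough sketch (the `run_stream` / `result_type` / `stream` names are assumed from the current Agent API and may differ):

```python
import asyncio

from typing_extensions import TypedDict

from pydantic_ai import Agent
from pydantic_ai.models.mistral import MistralModel


class CityLocation(TypedDict, total=False):
    city: str
    country: str


# A TypedDict (rather than a BaseModel) result type lets pydantic's
# experimental partial validation accept the incomplete JSON emitted
# while the structured response is still streaming.
agent = Agent(MistralModel('mistral-large-latest'), result_type=CityLocation)


async def main() -> None:
    async with agent.run_stream('Where were the 2024 Summer Olympics held?') as result:
        async for partial in result.stream():
            print(partial)  # progressively more complete dicts, e.g. {'city': 'Paris'}


if __name__ == '__main__':
    asyncio.run(main())
```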
@samuelcolvin
> BaseModel isn't yet supported by partial validation
Yes, now I remember; I overlooked this part. To make sure other devs don't miss it again, maybe the documentation could be updated to include this information?
Thanks @samuelcolvin for your feedback.
I will finish the tests (I'm still working on them), improve the streaming JSON mode, and ping you back for the second review.
I've got this merged with main; just let me merge it with your latest pushes.
You should definitely check the changes I made, in particular commenting out `created=1704067200,  # 2024-01-01` in the `tests.models.test_mistral.chunk` function. I dropped it to get the tests passing, but if you want to confirm the timestamp handling is correct you might want to tweak that.
But I figured it would probably be faster for me to set up the model settings stuff and deal with the refactor to the new unified ModelResponse type, and that you'd appreciate the help. If you don't, feel free to revert my push!
Thanks so much!