pydantic-ai
Anthropic support
So far we have OpenAI and Gemini, adding Anthropic seems a pretty obvious next step.
I don't think there are any other models we need to add to the core library immediately?
Perplexity AI could be an option?
@adityaraute if you want another model or provider, please create another issue.
I'll try to add it! Is that ok?
@samuelcolvin I don't have permission to push a new branch. Can you give me permission to create new branches?
@igor-silveira you have permission to create a fork and create a pull request from there - that's how GitHub works.
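For anyone new to the fork-based workflow mentioned above, a minimal sketch (the `<your-username>` placeholder and the branch name are examples, not anything from this thread):

```shell
# 1. Fork pydantic/pydantic-ai on GitHub via the "Fork" button, then:
git clone https://github.com/<your-username>/pydantic-ai.git
cd pydantic-ai

# 2. Create a feature branch in your fork (name is just an example):
git checkout -b anthropic-support

# 3. Make your changes, commit, and push the branch to your fork:
git push -u origin anthropic-support

# 4. Open a pull request from your fork's branch against pydantic/pydantic-ai.
```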
I'd be happy to pick this up, hoping to have a PR up by EOD :)
Think Mistral AI or Bedrock would be nice if you want to handle EU needs
We have a separate issue about bedrock: #118.
Mistral models are already supported by groq, and will be supported by #112.
If you need something more, please create a new issue.
> I'd be happy to pick this up, hoping to have a PR up by EOD :)
Is there a link to a PR for this you could share, perchance? No pressure to finish it if you're onto other things, but I'm super curious to give it a whirl!
@dazzaji,
Ah yeah, got tied up with some other things late last week. I'll prioritize this tomorrow. I can send a link then.
That being said, if you're really eager, feel free to start up, I'm happy to collaborate!
Can’t wait for you guys to finish this :) Function calling only works great with Anthropic right now.
@dazzaji,
I'm getting rolling on this now :).
What about support for OpenAI from Azure?
Is this support ready? When trying to adopt the Anthropic shim, it looks like the Anthropic model files aren't included in the latest 0.0.12 release. I couldn't tell if there was a bug in the docs, a bug in the packaging, or a user error on my part?
I'm getting the same issue that @mdelder raised.
Same issue here as @mdelder. It seems the documentation and code are not aligned?
We haven't made a release yet, will do tomorrow.
You can always install from main if you're impatient.
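Installing from main can be done with pip's VCS support (assuming the repo lives at github.com/pydantic/pydantic-ai):

```shell
# Install the current main branch directly from GitHub:
pip install "git+https://github.com/pydantic/pydantic-ai.git@main"
```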
I ended up building from main and it worked for me.
Thanks!
A release went out last night.
The release now has the Anthropic models. Thanks so much!
Thanks @sydney-runkle for Anthropic support. Regarding stream support, you mentioned it might come after an internal streaming refactor. Is there more info on what is needed?
I'd also love to find out what the ETA is on the streaming implementation for Claude.
We'll get this across the line by early next week :)
Awesome looking forward to it!
@sydney-runkle If you'd like someone to test it out I'm excellent at accidentally breaking every piece of code presented to me and happy to share my rare talent.
Marking as done, we've released anthropic streaming in v0.0.20!
@sydney-runkle is there any intention to support the Anthropic models on Google Vertex?
@sydney-runkle I just tried Anthropic streaming (pydantic-ai v0.0.20) and the stream comes in as one big chunk, not as an actual stream (although there is no longer an error about anthropic streaming not being supported). I can confirm the code is correct since a simple switch to "openai:gpt-4o" model creates an actual stream.
```python
...
async with agent.run_stream(
    user_prompt=prompt,
    deps=agent_deps,
    message_history=message_history,
) as result:
    # Stream the model's response
    async for chunk in result.stream():
        print("\nStreaming chunk:", chunk)
        yield f"event: document_messages\ndata: {json.dumps(chunk)}\n\n"
...
```
I discovered that this only happens if I specify the result_type (streaming works fine without):
```python
class DocumentModelResponse(TypedDict, total=False):
    model_response: str
    document_md: str

agent = Agent(
    agent_data["model"],
    deps_type=AgentDeps,
    result_type=DocumentModelResponse,
    tools=tools,
    system_prompt=system_prompt,
)
```