Add support for Anthropic Claude models
This PR adds support for Anthropic Claude models as an alternative LLM provider in the ADK. The implementation supports both the direct Anthropic API and Anthropic models via Google Cloud Vertex AI.
Closes https://github.com/google/adk-go/issues/225
Features
- New `model/anthropic` package implementing the `model.LLM` interface
- Support for all current Claude models (Opus 4.x, Sonnet 4.x, Haiku 4.5, etc.)
- Streaming and non-streaming response handling
- Tool/function calling with proper schema conversion
- Extended thinking support (mapped to `genai.Part` with `Thought=true`)
- Multimodal inputs (text, images, PDF documents)
- System instructions support
- Citations and web search tool handling
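The tool/function-calling schema conversion mentioned above can be sketched roughly as below. The `Schema` type here is a simplified stand-in for illustration (the real code converts `*genai.Schema`), but the shape of the output matches the plain JSON-schema map Anthropic's tools API expects:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Schema is a simplified stand-in for the genai schema type used
// to declare tool parameters.
type Schema struct {
	Type       string
	Desc       string
	Properties map[string]*Schema
	Required   []string
}

// toInputSchema recursively converts a tool parameter schema into
// the plain JSON-schema map expected as a tool's input_schema.
func toInputSchema(s *Schema) map[string]any {
	out := map[string]any{"type": s.Type}
	if s.Desc != "" {
		out["description"] = s.Desc
	}
	if len(s.Properties) > 0 {
		props := map[string]any{}
		for name, p := range s.Properties {
			props[name] = toInputSchema(p)
		}
		out["properties"] = props
	}
	if len(s.Required) > 0 {
		out["required"] = s.Required
	}
	return out
}

func main() {
	params := &Schema{
		Type: "object",
		Properties: map[string]*Schema{
			"city": {Type: "string", Desc: "City to look up"},
		},
		Required: []string{"city"},
	}
	b, _ := json.Marshal(toInputSchema(params))
	fmt.Println(string(b))
}
```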
Vertex AI Integration
Models can be used via Vertex AI by setting `Variant: anthropic.VariantVertexAI` or the `ANTHROPIC_USE_VERTEX=1` environment variable.
Testing Plan
- Added comprehensive unit tests
- Added example applications demonstrating various workflow patterns:
  - Basic usage (`examples/anthropic/`)
  - Sequential, parallel, and loop workflow agents (`examples/workflowagents/*-anthropic/`)
Manual E2E Testing
godotenv -f .env go run ./examples/anthropic/
godotenv -f .env go run ./examples/workflowagents/sequential-anthropic/
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
Summary of Changes
Hello @ivanvanderbyl, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly expands the capabilities of the ADK by integrating Anthropic Claude models. It provides developers with a powerful new option for building agents, supporting a wide array of advanced features such as multimodal input, sophisticated tool interactions, and flexible deployment through either Anthropic's native API or Google Cloud's Vertex AI. This integration aims to enhance the versatility and performance of agents developed using the ADK framework.
Highlights
- Anthropic Claude Integration: Adds comprehensive support for Anthropic Claude models as an alternative LLM provider within the ADK, enabling developers to leverage Claude's capabilities.
- Flexible Deployment Options: Supports both direct Anthropic API access and integration via Google Cloud Vertex AI, allowing configuration through code or environment variables for versatile deployment.
- Rich Feature Set: Includes streaming and non-streaming responses, robust tool/function calling with proper schema conversion, extended thinking support (mapped to `genai.Part` with `Thought=true`), multimodal inputs (text, images, PDFs), system instructions, and handling of citations and web search tools.
- New Examples and Tests: Introduces a new `model/anthropic` package with extensive unit tests and example applications demonstrating basic usage, as well as sequential, parallel, and loop workflow agents powered by Anthropic models.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands on the current page.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
This looks like a duplicate of PR #233.
@git-hulk Thanks for flagging this. I hadn't seen your PR when I started working on this, so apologies for the overlap.
I've been digging into the tool calling flow in your PR and noticed a few edge cases that might trip things up:
- Role constraints: Anthropic requires `tool_use` blocks in assistant messages and `tool_result` blocks in user messages. Right now the role comes straight from `content.Role`, so if upstream has the wrong role you'll hit API validation errors. Might be worth checking for `FunctionCall`/`FunctionResponse` parts and overriding the role accordingly.
- Message alternation: Related to the above, Anthropic needs strictly alternating user/assistant turns. Consecutive contents with the same role (pretty common after tool calls) need to be merged into a single message.
- Tool result content: The `stringifyFunctionResponse` heuristic that picks `result` or `output` fields can drop data unexpectedly. JSON-marshalling the whole `Response` map would be safer. Also worth validating that `FunctionResponse.ID` is present, since Anthropic needs it to correlate results back to the originating `tool_use`.
- ToolChoice: Minor one: setting `ToolChoice` to `"auto"` when tools are present might override user intent if they wanted different behaviour.
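The message-alternation point above can be sketched as a small merge pass. `Message` here is a simplified stand-in for the SDK's message type, just to show the collapsing of consecutive same-role turns:

```go
package main

import "fmt"

// Message is a simplified stand-in for an Anthropic API message;
// real code would build anthropic-sdk-go message params.
type Message struct {
	Role string
	Text []string
}

// mergeAlternating collapses consecutive messages with the same
// role into one, since Anthropic rejects conversations whose
// user/assistant turns do not strictly alternate.
func mergeAlternating(msgs []Message) []Message {
	var out []Message
	for _, m := range msgs {
		if n := len(out); n > 0 && out[n-1].Role == m.Role {
			out[n-1].Text = append(out[n-1].Text, m.Text...)
			continue
		}
		out = append(out, m)
	}
	return out
}

func main() {
	msgs := []Message{
		{Role: "user", Text: []string{"question"}},
		{Role: "assistant", Text: []string{"tool_use"}},
		{Role: "user", Text: []string{"tool_result A"}},
		// A second consecutive user turn, common after parallel tool calls.
		{Role: "user", Text: []string{"tool_result B"}},
	}
	merged := mergeAlternating(msgs)
	fmt.Println(len(merged)) // 3
	fmt.Println(merged[2].Text)
}
```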
@ivanvanderbyl Great, thanks for your tests and confirmation.
> Role constraints: Anthropic requires `tool_use` blocks in assistant messages and `tool_result` blocks in user messages. Right now the role comes straight from `content.Role`, so if upstream has the wrong role you'll hit API validation errors. Might be worth checking for `FunctionCall`/`FunctionResponse` parts and overriding the role accordingly.
I think this would be nice to have, since we can't assume the upstream never returns a wrong role.
> Tool result content: The `stringifyFunctionResponse` heuristic that picks `result` or `output` fields can drop data unexpectedly. JSON-marshalling the whole `Response` map would be safer. Also worth validating that `FunctionResponse.ID` is present, since Anthropic needs it to correlate results back to the originating `tool_use`.
Good suggestion, I will improve it later.
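The suggested fix is straightforward: marshal the whole response map instead of picking individual keys. A minimal sketch (the function name mirrors the one discussed above, but this is not the PR's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stringifyResponse marshals the whole function-response map so
// no fields are dropped, falling back to fmt.Sprintf if the map
// is not JSON-encodable. This replaces a heuristic that only
// picked "result" or "output" keys.
func stringifyResponse(resp map[string]any) string {
	b, err := json.Marshal(resp)
	if err != nil {
		return fmt.Sprintf("%v", resp)
	}
	return string(b)
}

func main() {
	resp := map[string]any{
		"result":  "ok",
		"details": map[string]any{"latency_ms": 12}, // would be dropped by a "result"-only heuristic
	}
	fmt.Println(stringifyResponse(resp))
}
```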
> Message alternation: Related to the above, Anthropic needs strictly alternating user/assistant turns. Consecutive contents with the same role (pretty common after tool calls) need to be merged into a single message.
> ToolChoice: Minor one: setting `ToolChoice` to `"auto"` when tools are present might override user intent if they wanted different behaviour.
For those two points, I just kept the same behavior as adk-python. I might add a check for when this happens.