adk-python
Fix: Prevent double JSON serialization of MCP tool responses
Fixes #3676
MCP tool responses arrive as JSON strings but were being double-serialized by _safe_json_serialize(), creating triple-nested JSON that prevented Claude and GPT from parsing tool results.
Example of the bug: '{"content": [{"type": "text", "text": "{\n "type"..."}]}'
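For illustration, a minimal standalone reproduction of the mechanism; the payload below is made up, only the escaping behavior matters:

```python
import json

# An MCP tool result already arrives as a JSON string whose "text" field itself
# contains JSON, so there are two levels of nesting before ADK touches it.
mcp_response = json.dumps(
    {"content": [{"type": "text", "text": json.dumps({"kind": "drive#fileList"})}]}
)

# Serializing that string again (what _safe_json_serialize effectively did)
# adds a third layer of escaping that the model can no longer parse as a tool result.
triple_nested = json.dumps(mcp_response)

print(mcp_response)   # {"content": [{"type": "text", "text": "{\"kind\": \"drive#fileList\"}"}]}
print(triple_nested)  # "{\"content\": [{\"type\": \"text\", \"text\": \"{\\\"kind\\\": ...
```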
This fix adds an isinstance(str) check before serialization. If the response is already a string (from MCP or other sources), it's used directly. Otherwise, it's serialized normally.
Impact: Without this fix, agents using LiteLLM with MCP tools would successfully call tools but fail to present results to users, appearing to hang or produce incomplete responses.
Tested with Claude Sonnet 4.5 and GPT-5 via Azure OpenAI with MCP tools (Google Drive, HubSpot CRM) in a production multi-agent system.
Link to Issue or Description of Change
1. Link to an existing issue:
- Closes: #3676
Testing Plan
Problem:
MCP tool responses come as JSON strings. The code called _safe_json_serialize() on these already-serialized strings, causing triple-nested JSON like: '{"content": [{"type": "text", "text": "{\\n \\"type\\"..."}]}'. This prevented Claude/GPT from parsing the tool results.
Solution:
Added an isinstance(str) check before serialization in _content_to_message_param() (line 369).
If the response is already a string, use it directly. Otherwise, serialize normally.
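A minimal sketch of the resulting behavior, with json.dumps standing in for _safe_json_serialize() and the helper name chosen here purely for illustration (in the actual change the branch sits inline in _content_to_message_param()):

```python
import json


def serialize_tool_response(response):
    """Illustrative stand-in for the fixed branch in _content_to_message_param()."""
    if isinstance(response, str):
        # Already a serialized JSON string (e.g. an MCP tool result): pass through as-is.
        return response
    # Dicts, lists, and other objects are serialized exactly as before.
    return json.dumps(response)


# A pre-serialized MCP-style response stays single-level...
print(serialize_tool_response('{"content": [{"type": "text", "text": "hi"}]}'))
# ...while a structured response is still serialized normally.
print(serialize_tool_response({"status": "ok"}))
```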
Unit Tests:
- [ ] I have added or updated unit tests for my change.
- [x] All unit tests pass locally.
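For reviewers, one possible shape for the missing regression test. It exercises a stand-in helper that mirrors the fixed branch rather than ADK's internal API, since calling _content_to_message_param() directly would require constructing the real genai Content/Part objects:

```python
import json


def serialize_tool_response(response):
    # Stand-in mirroring the isinstance(str) branch added by this PR.
    return response if isinstance(response, str) else json.dumps(response)


def test_string_response_is_not_double_serialized():
    raw = '{"content": [{"type": "text", "text": "hello"}]}'
    assert serialize_tool_response(raw) == raw  # no extra layer of escaping


def test_non_string_response_is_still_serialized():
    assert json.loads(serialize_tool_response({"status": "ok"})) == {"status": "ok"}
```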
Manual End-to-End (E2E) Tests:
Setup:
- Multi-agent system with ADK 1.19.0 + LiteLLM wrapper
- Claude Sonnet 4.5 via Vertex AI (vertex_ai/claude-sonnet-4-5@20250929)
- GPT-5 via Azure OpenAI (azure/gpt-5-openai-latest)
- MCP tools: Google Drive agent, HubSpot CRM agent
- Gluon Link (quanutmzero) secure MCP gateway for intent-based governance
Test Cases:
- Google Drive: List files, search queries
- HubSpot CRM: Company listing
Before Fix:
- Log line 3355: triple-nested JSON: '{"content": [{"type": "text", "text": "{\\n..."}]}'
- Tools executed successfully, but results were never displayed to the user
- Agents appeared to hang after tool calls
After Fix:
- Clean single-level JSON sent to LiteLLM
- Tool results are properly parsed and displayed
- Complete formatted responses (markdown tables) rendered correctly
- Both Claude and GPT-5 successfully present tool outputs
Checklist
- [x] I have read the CONTRIBUTING.md document.
- [x] I have performed a self-review of my own code.
- [x] I have commented my code, particularly in hard-to-understand areas.
- [ ] I have added tests that prove my fix is effective or that my feature works.
- [x] New and existing unit tests pass locally with my changes.
- [x] I have manually tested my changes end-to-end.
- [x] Any dependent changes have been merged and published in downstream modules.
Additional context
This fix is critical for production systems using MCP tools with LiteLLM models. The bug affects any pre-serialized JSON response, not just those from MCP tools. The fix maintains backward compatibility with non-string responses while correctly handling already-serialized strings.
Summary of Changes
Hello @thesynapses, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical bug where tool responses, especially from MCP tools, were being double-serialized into triple-nested JSON structures. This prevented large language models from correctly parsing the tool results, leading to agents appearing unresponsive or failing to display output. The fix introduces a conditional check to ensure that only non-string responses are serialized, thereby guaranteeing proper parsing and display of tool outputs for a smoother agent experience.
Highlights
- Prevented double JSON serialization: Introduced a check to prevent _safe_json_serialize() from double-serializing already-stringified JSON responses, which previously led to triple-nested JSON.
- Improved LLM tool parsing: Ensures that large language models like Claude and GPT can correctly parse and display results from tool calls, particularly those from MCP tools, resolving issues where agents appeared to hang.
- Conditional serialization logic: Implemented an isinstance(str) check for part.function_response.response; if it is already a string, it is used directly, otherwise it undergoes normal serialization.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Response from ADK Triaging Agent
Hello @thesynapses, thank you for creating this PR!
This is a great contribution. Could you please add unit tests for this change? This will help to ensure the quality of the code and prevent regressions.
This will help reviewers review your PR more efficiently. Thanks!
Hi @thesynapses, thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.
Hi @wyf7107, can you please review this?