[Bedrock] ValidationException: "Thinking may not be enabled when tool_choice forces tool use" with interleaved thinking on Sonnet 4.5
## Initial Checks
- [x] I confirm that I'm using the latest version of Pydantic AI
- [x] I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
## Description

### Bug Description

When using Claude Sonnet 4.5 on AWS Bedrock with interleaved thinking enabled, the agent fails when it attempts to use tools after the initial call. This appears to be a regression introduced between versions 1.0.1 and 1.0.15.
### Error Message

```
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the ConverseStream operation: The model returned the following errors: Thinking may not be enabled when tool_choice forces tool use.
```
### Configuration

- **PydanticAI Version:** 1.0.15
- **Model:** Claude Sonnet 4.5 via AWS Bedrock
- **Provider:** bedrock
- **Model Settings:**

```json
{
  "max_tokens": 63000,
  "temperature": 1.0,
  "timeout": 300,
  "parallel_tool_calls": true,
  "bedrock_additional_model_requests_fields": {
    "anthropic_beta": [
      "token-efficient-tools-2025-02-19",
      "interleaved-thinking-2025-05-14"
    ],
    "thinking": {
      "type": "enabled",
      "budget_tokens": 4096
    }
  }
}
```
### Steps to Reproduce

1. Configure an agent with AWS Bedrock Sonnet 4.5 and thinking enabled (settings above)
2. Call the agent with structured output: `agent.iter(prompt, output_type=MyOutputType)`
3. The agent responds to the first call with the error above
### Stack Trace

```
File ".venv/lib/python3.12/site-packages/pydantic_ai/models/instrumented.py", line 381, in request_stream
  async with self.wrapped.request_stream(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/lib/python3.12/contextlib.py", line 210, in __aenter__
  return await anext(self.gen)
         ^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/pydantic_ai/models/bedrock.py", line 289, in request_stream
  response = await self._messages_create(messages, True, settings, model_request_parameters)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/pydantic_ai/models/bedrock.py", line 412, in _messages_create
  model_response = await anyio.to_thread.run_sync(functools.partial(self.client.converse_stream, **params))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
  return await get_async_backend().run_sync_in_worker_thread(
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
  return await future
         ^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 967, in run
  result = context.run(func, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/botocore/client.py", line 602, in _api_call
  return self._make_api_call(operation_name, kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/botocore/context.py", line 123, in wrapper
  return func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.12/site-packages/botocore/client.py", line 1078, in _make_api_call
  raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the ConverseStream operation: The model returned the following errors: Thinking may not be enabled when tool_choice forces tool use.
```
### Expected Behavior
The agent should successfully use tools even when thinking is enabled, as it did in version 1.0.1.
### Additional Context

- **Regression:** This worked correctly in PydanticAI version 1.0.1
- **Consistent:** The error occurs on every attempt with these settings
- **Timing:** It fails specifically when tools are required after the initial response

The error suggests that when PydanticAI forces tool usage (likely because `output_type` requires structured output or tool selection), this conflicts with Bedrock's thinking feature. It may be related to how `tool_choice` is set in the Bedrock API call when thinking is enabled.
## Example Code

```python
import asyncio

from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.anthropic import AnthropicModelSettings
from pydantic_ai.models.bedrock import BedrockConverseModel


# Define a simple tool
def get_current_time() -> str:
    """Get the current time."""
    from datetime import datetime
    return datetime.now().isoformat()


# Define a simple structured output type for this example
class SimpleOutput(BaseModel):
    """Simple structured output."""
    summary: str
    timestamp: str


# Model settings that trigger the bug
model_settings = {
    "max_tokens": 63000,
    "temperature": 1.0,
    "timeout": 300,
    "parallel_tool_calls": True,
    "bedrock_additional_model_requests_fields": {
        "anthropic_beta": [
            "token-efficient-tools-2025-02-19",
            "interleaved-thinking-2025-05-14"
        ],
        "thinking": {
            "type": "enabled",
            "budget_tokens": 4096
        }
    }
}

# Create the Bedrock model
model = BedrockConverseModel(
    model_name="us.anthropic.claude-sonnet-4-5-20250929-v1:0",
    settings=AnthropicModelSettings(**model_settings)
)

# Create agent with tool
agent = Agent(
    model=model,
    tools=[get_current_time],
    system_prompt="You are a helpful assistant. Use the get_current_time tool to provide accurate timestamps."
)


async def main():
    """Run the agent and trigger the error."""
    # This prompt should trigger tool usage
    prompt = "What is the current time?"
    nodes = []
    # Using iter() with output_type triggers the bug when tools are needed
    async with agent.iter(prompt, output_type=SimpleOutput) as agent_run:
        async for node in agent_run:
            nodes.append(node)
        result = agent_run.result.output
        print("\n\nResult:", result)


if __name__ == "__main__":
    asyncio.run(main())
```
## Python, Pydantic AI & LLM client version
- **Python:** 3.12.7
- **Pydantic:** 2.11.7
- **PydanticAI:** 1.0.15
- **boto3:** 1.40.38
- **botocore:** 1.40.38
- **anthropic:** 0.69.0
@sharon8811 Thanks for the report. This is caused by https://github.com/pydantic/pydantic-ai/pull/2819, which fixed a bug (we were never sending `tool_choice=required` to Anthropic, even when it was supported) but, as you point out, introduces a new issue (we're now sending `tool_choice=required` to Anthropic in some cases where it's not supported).

In `AnthropicModel`, we "solve" this with a `UserError`, and we could do the same in `BedrockModel`:

https://github.com/pydantic/pydantic-ai/blob/9b1913ef16f75b3575018a727f36ca25b1470372/pydantic_ai_slim/pydantic_ai/models/anthropic.py#L276-L279

But I'm thinking now that we could also drop `tool_choice=required`/`any` in this scenario, even though we wouldn't be forcing the model to call the output tool anymore. If the model answers with text instead of calling the `final_result` tool, Pydantic AI will automatically send back a `Please include your response in a tool call.` error message, which should be sufficient to get it to call the tool, and would likely perform better than `PromptedOutput`. This'd be similar to https://github.com/pydantic/pydantic-ai/issues/2793.
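For illustration, the fallback described above could be sketched roughly as follows. This is a hypothetical helper, not the actual Pydantic AI internals; the name `choose_tool_choice` and both parameters are made up:

```python
# Hypothetical sketch of the proposed behavior: fall back to 'auto'
# instead of forcing tool use when extended thinking is enabled.
# None of these names come from the Pydantic AI codebase.

def choose_tool_choice(output_tool_required: bool, thinking_enabled: bool) -> dict:
    """Pick a Converse-style toolChoice value.

    Anthropic models reject a forced tool choice ('any' or a named tool)
    while thinking is enabled, so in that case we drop back to 'auto' and
    rely on the retry message to steer the model toward the output tool.
    """
    if output_tool_required and not thinking_enabled:
        return {"any": {}}  # safe to force a tool call
    return {"auto": {}}  # let the model decide; the retry prompt handles text answers
```

The trade-off is that with `auto` the model may occasionally answer in plain text, costing one extra round trip for the retry message.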
I'm also trying to use Anthropic models on Bedrock with thinking and structured outputs. I need a way to force `tool_choice=auto` to get this to work. On the one hand, it would be nice if PydanticAI could detect thinking and map the correct value for the given model on Bedrock, but of course it's hard to keep up with new models coming out every day. So the preferred option would be some way of setting an explicit value in the model settings.

I'm on Pydantic AI 1.7.0 at the moment.
### Note on documentation

The AWS documentation on `tool_choice` and extended thinking seems out of date:

> Tool choice limitation: Tool use with thinking only supports `tool_choice: any`. It does not support providing a specific tool, `auto`, or any other values.

This appears to be incorrect, or perhaps out of date. In practice, for Sonnet 4, 4.5, and Haiku 4.5, `tool_choice: {"auto": {}}` is the only option that works when thinking is enabled.
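Concretely, the only Converse `toolConfig` shape that was accepted alongside thinking in my testing looks like the one built below. This is a sketch; the helper name and the placeholder tool spec are mine, not part of any API:

```python
# Sketch of a Bedrock Converse toolConfig that is accepted alongside
# extended thinking: toolChoice must be {"auto": {}}. Using {"any": {}}
# or a named tool here triggers the ValidationException from this issue.

def build_tool_config(tool_specs: list[dict]) -> dict:
    """Wrap tool specs in the Converse toolConfig shape with toolChoice auto."""
    return {
        "tools": [{"toolSpec": spec} for spec in tool_specs],
        "toolChoice": {"auto": {}},
    }

# Placeholder tool spec for illustration only
spec = {
    "name": "get_current_time",
    "inputSchema": {"json": {"type": "object", "properties": {}}},
}
config = build_tool_config([spec])
```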
### Request

Is there any way we could have a manual override setting to force `tool_choice` on Bedrock?
### Workaround

```python
from pydantic_ai.models import ModelRequestParameters
from pydantic_ai.models.bedrock import BedrockConverseModel, BedrockModelSettings

# Claude thinking enabled
settings = BedrockModelSettings(
    ...,
    bedrock_additional_model_requests_fields={
        "thinking": {
            "type": "enabled",
            "budget_tokens": 1024
        }
    }
)

...

def is_thinking_enabled(settings: BedrockModelSettings | None) -> bool:
    # BedrockModelSettings is a TypedDict, so use key access rather than attributes.
    if settings and settings.get("bedrock_additional_model_requests_fields"):
        thinking = settings["bedrock_additional_model_requests_fields"].get("thinking") or {}
        thinking_type = thinking.get("type") or ""
        return thinking_type.lower() == "enabled"
    return False


class CustomConverseModel(BedrockConverseModel):
    def _map_tool_config(self, model_request_parameters: ModelRequestParameters) -> dict | None:
        tool_config = super()._map_tool_config(model_request_parameters)
        if tool_config is None:
            return None
        if tool_config.get("toolChoice") and is_thinking_enabled(self.settings):
            # Force tool choice to 'auto' when thinking is enabled.
            tool_config["toolChoice"] = {"auto": {}}
        return tool_config
```
@gmetzker-4c I think the right solution will be what I described in the last paragraph of https://github.com/pydantic/pydantic-ai/issues/3092#issuecomment-3376409540, which is to not set `tool_choice=any` if we know it's not supported, which we can easily encode for anthropic + thinking with a new field on `BedrockModelProfile`.

In the meantime, you can disable `tool_choice` entirely using the existing `BedrockModelProfile.bedrock_supports_tool_choice` setting:

```python
from pydantic_ai.models.bedrock import BedrockConverseModel, BedrockModelSettings
from pydantic_ai.providers.bedrock import BedrockModelProfile, BedrockProvider

model_name = "..."
provider = BedrockProvider(api_key="...")
profile = BedrockModelProfile.from_profile(provider.model_profile(model_name)).update(bedrock_supports_tool_choice=False)

settings = BedrockModelSettings(
    bedrock_additional_model_requests_fields={
        "thinking": {"type": "enabled", "budget_tokens": 1024}
    }
)

model = BedrockConverseModel(
    model_name, provider=provider, settings=settings, profile=profile
)
```
> I think the right solution will be what I described in the last paragraph in https://github.com/pydantic/pydantic-ai/issues/3092#issuecomment-3376409540, which is to not set tool_choice=any if we know it's not supported, which we can easily encode for anthropic + thinking with a new field on BedrockModelProfile.
>
> In the mean time, you can disable tool_choice entirely using the existing BedrockModelProfile.bedrock_supports_tool_choice setting:
If we set `bedrock_supports_tool_choice=False`, it looks like it drops `toolChoice` from the tool config entirely. In my experiments, `toolChoice: {"auto": {}}` works with Bedrock -> Claude + thinking.

Looking over the Anthropic docs, it appears `auto` is the default anyway, so I suppose it works the same as your suggestion.
Does https://github.com/pydantic/pydantic-ai/pull/3611 partially address this?
@dsfaccini I wouldn't expect it to, unless we added a check like https://github.com/pydantic/pydantic-ai/issues/3092#issuecomment-3376409540 to BedrockConverseModel in that PR, which would be out of scope 😄