anthropic api error: `tool_use` ids were found without `tool_result` blocks immediately after
Description
Anthropic messages aren't being sent correctly.
reproduce using
```ts
import { generateText, type ModelMessage } from "ai";

const modelMessages: ModelMessage[] = [
  {
    role: "user",
    content: [{ type: "text", text: "generate 10 items" }]
  },
  {
    role: "assistant",
    content: [
      {
        type: "tool-call",
        toolCallId: "tool-example-123",
        toolName: "json",
        input: {
          message: "generate 10 items"
        }
      },
      {
        type: "text",
        text: "I generated code for 10 items."
      }
    ]
  },
  {
    role: "tool",
    content: [
      {
        type: "tool-result",
        toolCallId: "tool-example-123",
        toolName: "json",
        output: {
          type: "json",
          value: {
            code: "export const code = () => [...]",
            packageJson: "{}"
          }
        }
      }
    ]
  },
  {
    role: "user",
    content: [{ type: "text", text: "generate 100 items" }]
  }
];

try {
  const result = await generateText({
    messages: modelMessages,
    model: "anthropic:claude-sonnet-4-20250514",
    system: "You are a helpful assistant."
  });
  console.log("done");
} catch (error) {
  console.error("tryCatch", error);
}
```
error message
AI_APICallError: messages.1: `tool_use` ids were found without `tool_result` blocks immediately after: tool-example-123. Each `tool_use` block must have a corresponding `tool_result` block in the next message.
AI SDK Version
- ai: "5.0.37",
- @ai-sdk/anthropic: "2.0.14"
Code of Conduct
- [x] I agree to follow this project's Code of Conduct
@gr2m opened a new issue
@tayyab3245 for eyes
Thanks for the tag. I'm looking into this now to see how it relates to the fix in PR #8474.
Hi @abhi-slash-git, thanks for the excellent bug report and clear reproduction case.
@gr2m, I've completed a deep dive and can confirm the root cause. The message conversion logic wasn't correctly ordering tool_result parts before user text and had some inconsistencies in handling mixed assistant content, which violates Anthropic's strict message sequence requirements.
I have a fix ready in PR #8524
The PR normalizes the message structure to solve this. It also includes comprehensive tests covering your exact scenario, plus several other edge cases to prevent future regressions.
As a side note, I also discovered a separate, more subtle bug where provider-executed tools (like web_search) lose their special typing when sent as role: "tool". To keep this PR focused, I have opened a new, dedicated issue to track that here #8527
Would you like me to review the PR? Happy to help
@gr2m Is this not affecting others? I thought it would be more widespread.
@gr2m Running into this as well. Currently on @ai-sdk/[email protected] but have also incrementally tried versions back to 2.0.12 prior to related changes with the same result. Is there a suggested version to roll back to, or a suggested workaround until a fix is ready? Appreciate any insight here.
Same scenario, just lost my day to this bug.
Argh, this is a blocker for me also! How do we get some traction on this?
+1
For anyone suffering from this, depending on your use-case the workaround is this:
- https://ai-sdk.dev/providers/openai-compatible-providers switch to this provider
- https://docs.claude.com/en/api/openai-sdk use this compatibility SDK endpoint
This works for my use-case of better tool calling, though it of course sacrifices some Claude-specific features.
I tried for a few hours patching this, but to no avail.
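For reference, the first workaround above might look like the following sketch, using the AI SDK's OpenAI-compatible provider pointed at Anthropic's OpenAI SDK compatibility endpoint. The base URL and model id follow the compatibility docs linked above; treat the exact settings as assumptions to verify for your setup:

```typescript
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

// Assumption: Anthropic's OpenAI-compatible endpoint lives at
// https://api.anthropic.com/v1 (per the compatibility docs linked above).
const anthropicCompat = createOpenAICompatible({
  name: "anthropic-compat",
  baseURL: "https://api.anthropic.com/v1",
  apiKey: process.env.ANTHROPIC_API_KEY ?? "",
});

const { text } = await generateText({
  model: anthropicCompat("claude-sonnet-4-20250514"),
  prompt: "Say hello",
});
```

This route avoids the strict `tool_use`/`tool_result` sequencing error because requests go through the OpenAI-style tool-calling format instead.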
very sorry for all the trouble! I'll be looking at this today 🙇🏼
Hey, looks like the issue has been resolved. It's no longer throwing errors.
Can confirm I am still seeing the same error when making multiple tool calls. I'm not fluent in Anthropic's API, but some cursory reading and observations below. Hopefully this helps, and perhaps you're already on top of this. 🙂
In v4, when multiple tool calls were made the converted messages separated and alternated the tool-call and tool-result messages which looked something like this:
v4 converted messages
```json
[
  {
    "role": "assistant",
    "content": [
      { "type": "tool-call", "toolCallId": "tool-1", "toolName": "my-tool-1" }
    ]
  },
  {
    "role": "tool",
    "content": [
      { "type": "tool-result", "toolCallId": "tool-1", "toolName": "my-tool-1", "result": {} }
    ]
  },
  {
    "role": "assistant",
    "content": [
      { "type": "tool-call", "toolCallId": "tool-2", "toolName": "my-tool-2" }
    ]
  },
  {
    "role": "tool",
    "content": [
      { "type": "tool-result", "toolCallId": "tool-2", "toolName": "my-tool-2", "result": {} }
    ]
  }
]
```
In v5, the call parts and result parts are combined into messages like this, which I believe is what Anthropic's API is unhappy with.
v5 converted messages
```json
[
  {
    "role": "assistant",
    "content": [
      { "type": "tool-call", "toolCallId": "tool-1", "toolName": "my-tool-1" },
      { "type": "tool-call", "toolCallId": "tool-2", "toolName": "my-tool-2" }
    ]
  },
  {
    "role": "tool",
    "content": [
      { "type": "tool-result", "toolCallId": "tool-1", "toolName": "my-tool-1" },
      { "type": "tool-result", "toolCallId": "tool-2", "toolName": "my-tool-2" }
    ]
  }
]
```
I also cannot reproduce the problem when testing it end-to-end.
The original code from @abhi-slash-git is incorrect: the order of the content items in the assistant message is wrong. It's not what I get back from an AI SDK call (from result.response.messages).
Also anthropic:claude-sonnet-4-20250514 should be anthropic/claude-sonnet-4-20250514 I assume?
full example with correct fixtures
```ts
import { generateText, type ModelMessage } from "ai";

const modelMessages: ModelMessage[] = [
  {
    role: "user",
    content: [{ type: "text", text: "generate 10 items" }]
  },
  {
    role: "assistant",
    content: [
      {
        type: "text",
        text: "I generated code for 10 items."
      },
      {
        type: "tool-call",
        toolCallId: "tool-example-123",
        toolName: "json",
        input: {
          message: "generate 10 items"
        }
      }
    ]
  },
  {
    role: "tool",
    content: [
      {
        type: "tool-result",
        toolCallId: "tool-example-123",
        toolName: "json",
        output: {
          type: "json",
          value: {
            code: "export const code = () => [...]",
            packageJson: "{}"
          }
        }
      }
    ]
  },
  {
    role: "user",
    content: [{ type: "text", text: "generate 100 items" }]
  }
];

try {
  const result = await generateText({
    messages: modelMessages,
    model: "anthropic/claude-sonnet-4-20250514",
    system: "You are a helpful assistant."
  });
  console.log("done");
} catch (error) {
  console.error("tryCatch", error);
}
```
If you still see the problem, please share where you get the modelMessages from
@abhi-slash-git are you using the Cloudflare agents library by chance?
I believe I'm experiencing this error because their AI SDK 5 code is recreating message parts in an invalid/unexpected order. More details here.
Hey @gr2m !
Your full example with the correct fixtures does work for me, but only because you have a role: "user" block at the very end of the message array. When working with an iterative agent, I want the LLM to summarize the findings of the tool results.
My current approach is to feed the full message array, with the appended role: "tool" / type: "tool-result" message, back to the LLM. But this is where the issue happens (only with Claude-3.7-Sonnet-Thinking?) - the Anthropic API throws an error.
Can you elaborate on the correct structure for this behavior? My current hotfix is basically to add an empty user message at the end of the array - but that doesn't seem like an ideal fix.
Thanks in advance!
Me too
@fabiansimon @elimelt @abhi-slash-git The position of the final text in the assistant message in the top example is incorrect. At that point the assistant cannot have the tool result yet. Instead, the text needs to go in a separate assistant message that comes after the tool message (before the final user message).
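Anthropic's constraint can be checked locally before sending: every tool-call id in a message must be answered by a tool-result id in the message that immediately follows. A rough sketch of such a check (the types and helper name here are mine, not SDK APIs):

```typescript
type Part = { type: string; toolCallId?: string };
type Msg = { role: "user" | "assistant" | "tool"; content: Part[] };

// Returns the ids of tool calls that are NOT answered by a tool-result
// in the immediately following message (mirrors Anthropic's error check).
function unansweredToolCalls(messages: Msg[]): string[] {
  const missing: string[] = [];
  messages.forEach((msg, i) => {
    const calls = msg.content
      .filter((p) => p.type === "tool-call")
      .map((p) => p.toolCallId!);
    if (calls.length === 0) return;
    const next = messages[i + 1];
    const results = new Set(
      (next?.content ?? [])
        .filter((p) => p.type === "tool-result")
        .map((p) => p.toolCallId)
    );
    missing.push(...calls.filter((id) => !results.has(id)));
  });
  return missing;
}
```

Running this over a reconstructed history before calling generateText makes it easy to spot the exact id the API would reject.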
We are experiencing the same issue when using Anthropic. When reconstructing conversation history from our database, we group all tool calls in one assistant message, followed by all tool results in one tool message:
```ts
[
  { role: 'assistant', content: [toolCall1, toolCall2, toolCall3] },
  { role: 'tool', content: [toolResult1, toolResult2, toolResult3] }
]
```
This results in the following error:
```json
{
  "responseBody": "{\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"messages.3: `tool_use` ids were found without `tool_result` blocks immediately after: <tool_id>. Each `tool_use` block must have a corresponding `tool_result` block in the next message.\"},\"request_id\":\"<req_id>\"}",
  "isRetryable": false,
  "data": {
    "type": "error",
    "error": {
      "type": "invalid_request_error",
      "message": "messages.3: `tool_use` ids were found without `tool_result` blocks immediately after: <tool_id>. Each `tool_use` block must have a corresponding `tool_result` block in the next message."
    }
  }
}
```
But if we restructure the message array to interleave tool calls and results before passing it to the SDK like the following:
```ts
const messages = toolCallsAndResults.flatMap(([toolCall, toolResult]) => [
  { role: 'assistant', content: [toolCall] },
  { role: 'tool', content: [toolResult] }
]);
```
it solves the Anthropic issue.
It would be great if the SDK could handle this transformation automatically based on the provider's requirements.
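As a stopgap until the SDK handles it, the regrouping described above can be sketched generically. The helper name and types below are mine, not SDK APIs, and the sketch assumes the assistant message contains only tool calls whose results all sit in the next tool message:

```typescript
type Part = { type: string; toolCallId?: string };
type Msg = { role: "user" | "assistant" | "tool"; content: Part[] };

// Splits an assistant message holding several tool calls, followed by a
// tool message holding the matching results, into alternating
// single-call / single-result pairs that Anthropic's sequencing accepts.
function interleaveToolMessages(messages: Msg[]): Msg[] {
  const out: Msg[] = [];
  for (let i = 0; i < messages.length; i++) {
    const msg = messages[i];
    const next = messages[i + 1];
    const calls = msg.content.filter((p) => p.type === "tool-call");
    if (msg.role === "assistant" && calls.length > 1 && next?.role === "tool") {
      for (const call of calls) {
        const result = next.content.find(
          (p) => p.type === "tool-result" && p.toolCallId === call.toolCallId
        );
        out.push({ role: "assistant", content: [call] });
        if (result) out.push({ role: "tool", content: [result] });
      }
      i++; // the tool message has been consumed
    } else {
      out.push(msg);
    }
  }
  return out;
}
```

Applying this to the history right before passing it to generateText keeps the persisted format unchanged while satisfying the API.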
We were able to reproduce this issue using anthropic models claude-haiku-4-5-20251001 and claude-sonnet-4-20250514 using "@ai-sdk/anthropic": "^2.0.36"
https://github.com/vercel/ai/pull/9752/files reproduces the issue in the example. The test shows our conversion currently behaves as expected, but it might need to be updated if some assumption on the Anthropic side changed and caused this error.
@tatianainama I take that back. There was a bug in #9652. With the current example as is, no error shows up.
thanks @lgrammel, I'll investigate more on our side and see what could be causing the problem.
I'm not doing any message massaging, and on the latest SDK versions I am getting this when parallel tool calls happen in the same step.
As the response with multiple tool calls gets streamed to the frontend (useChat), and I add a subsequent message, the following streamText SDK call throws the same error as OP.
We had never seen this until we switched a large part of our workload over to Anthropic Haiku 4.5, so I think there might have been a behavior change with parallel tool calling in newer Anthropic models?
I have attempted to reproduce this issue again: #9961
Unfortunately the error does not appear. If someone could build on #9961 and create a reproduction that would be very helpful.
We just upgraded to latest SDK versions (our versions were 2 months old) and will let it sit in production for a bit and see if we continue to receive reports of the issue, if we do I will dig deeper and create a repro 👍
We are still seeing the issue after upgrading so I am now digging deeper into the issue to see if I can create a repro
any update on this? we're seeing this occasionally with:
- ai: "^5.0.0"
- @ai-sdk/anthropic: "^2.0.44"
I have had a really hard time trying to create a repro, so unfortunately until someone is able to repro and add a failing test I don't think we will be able to resolve this one. I put in some hacks on our side to work around the problem in the short term.
@ultrafro Are you persisting the messages to your DB and hydrating them back out? If so which format are you persisting to your DB?
Hey there, I just created a quick reproduction here
It's probably linked to how we currently persist messages in our DB, but with this structure, the issue is triggered every time we load a conversation with existing assistant messages.
I hope this helps. I'm also looking for feedback on the right way to persist messages in the DB, in case I'm using a wrong pattern.