convertToModelMessages fails with "no tool invocation found" when messages contain approval-responded parts
Environment
- AI SDK: v6.0.0-beta.92
- Framework: Next.js 16.0.0
- React: 19.1.0
- Provider: Anthropic (claude-sonnet-4-5)
Bug Description
After approving a tool execution via addToolApprovalResponse(), calling convertToModelMessages() with the resulting message array throws:
```
Error: no tool invocation found for tool call toolu_01KALFHf9WPzeYEaSWsbBdhX
```
Reproduction
1. Define a tool with approval:
```ts
const backtestTool = tool({
  description: "Run backtest analysis",
  inputSchema: z.object({
    snapshot_id: z.string(),
    timeRange: z.string(),
  }),
  needsApproval: true,
  execute: async ({ snapshot_id, timeRange }) => {
    // implementation
  },
});
```
2. Server-side route handler:
```ts
const { messages } = await req.json();

const result = streamText({
  model: anthropicModel,
  messages: convertToModelMessages(messages), // ← FAILS HERE
  tools: { backtestSnapshot: backtestTool },
});
```
3. Client-side approval:
```ts
const chatHook = useChat({
  sendAutomaticallyWhen: (options) =>
    lastAssistantMessageIsCompleteWithApprovalResponses(options),
});

// User clicks Approve button:
chatHook.addToolApprovalResponse({
  id: part.approval.id,
  approved: true,
});
```
Expected Behavior
convertToModelMessages() should handle messages containing parts with state: "approval-responded" and allow the tool to execute.
Actual Behavior
Server throws Error: no tool invocation found for tool call toolu_... when processing messages with approval-responded parts.
Workaround Attempted
Filtering out approval-responded parts before conversion prevents the error but breaks the approval flow:
```ts
function filterApprovalRespondedParts(messages: any[]): any[] {
  return messages.map((msg) => {
    if (msg.role !== "assistant" || !msg.parts) return msg;
    return {
      ...msg,
      parts: msg.parts.filter(
        (part: any) => !part.toolCallId || part.state !== "approval-responded"
      ),
    };
  });
}

const result = streamText({
  messages: convertToModelMessages(filterApprovalRespondedParts(messages)),
  // ...
});
```
Problem with workaround: The model has no memory of the approval and requests approval again, creating an infinite loop instead of executing the tool.
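To make the failure mode concrete, here is the filter applied to a minimal mock conversation. Note that the message and part shapes below are simplified stand-ins for illustration, not the SDK's real UIMessage types:

```typescript
// Simplified stand-ins for UI message parts; the real SDK types are richer.
type Part = { type: string; toolCallId?: string; state?: string; text?: string };
type Msg = { role: string; parts?: Part[] };

// Same logic as the workaround above: drop assistant parts that carry
// an "approval-responded" tool state.
function filterApprovalRespondedParts(messages: Msg[]): Msg[] {
  return messages.map((msg) => {
    if (msg.role !== "assistant" || !msg.parts) return msg;
    return {
      ...msg,
      parts: msg.parts.filter(
        (part) => !part.toolCallId || part.state !== "approval-responded"
      ),
    };
  });
}

const input: Msg[] = [
  { role: "user", parts: [{ type: "text", text: "run the backtest" }] },
  {
    role: "assistant",
    parts: [
      { type: "text", text: "Requesting approval…" },
      {
        type: "tool-backtestSnapshot",
        toolCallId: "toolu_123", // hypothetical id
        state: "approval-responded",
      },
    ],
  },
];

const out = filterApprovalRespondedParts(input);
// The approval part is gone, so the model never learns the user approved:
console.log(out[1].parts!.length); // 1
```

Since the surviving parts contain no record of the approval, the model re-requests approval on the next turn, which is exactly the loop described above.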
Impact
Tool execution approval is currently non-functional when using convertToModelMessages() in server-side route handlers.
try passing the tools in the 2nd arg of convertToModelMessages
Oh shoot. Thank you, will test!
Hi @lgrammel. I'm also facing the same error, and I did try passing in the tools to the convertToModelMessages function, but I still get the same error. My setup is identical to @camwest's: I have tools that execute on the server side and I'm handling the automatic resending of the messages via the lastAssistantMessageIsCompleteWithApprovalResponses function on the client side. Any suggestions?
Tried passing tools to convertToModelMessages but error persists:
```ts
const tools = {
  web_search: webSearchTool,
  ...createQueryBuilderTools(conversationId, tiltUuid, tilt),
};

const convertedMessages = convertToModelMessages(messages, { tools });

const result = streamText({
  model: anthropicModel,
  messages: convertedMessages,
  tools,
  // ...
});
```
What happens:
- convertToModelMessages correctly produces the tool-call, tool-approval-request, tool-approval-response structure
- The tool executes successfully (backtest completes, stores result in DB)
- But streaming fails with Error: no tool invocation found for tool call toolu_01QDDtB2CKVHf1z4FZWfgGnd
- The browser gets ERR_INCOMPLETE_CHUNKED_ENCODING

Additional symptom: the UI shows duplicate assistant messages (same message ID rendered twice).

The error occurs during the response piping phase, after tool execution, not during message conversion.
@lgrammel Any thoughts?
Also seeing the same behavior as @camwest even after trying to pass tools into convertToModelMessages
Same
This is honestly cooked, hoping this can be fixed soon!
Is it possible that the problem only occurs when using Sonnet 4.5? @bdok23
I have been investigating this issue for some time, and it is very weird: as soon as I add onFinish, it fails, even with this example:
This is fine:
```ts
return createAgentUIStreamResponse({
  agent: weatherWithApprovalAgent,
  messages,
});
```
Tool invocations work, but as soon as I change it to this:
```ts
return createAgentUIStreamResponse({
  agent: weatherWithApprovalAgent,
  messages,
  sendReasoning: false,
  onFinish: async ({ messages, responseMessage }) => {
    console.dir(messages, { depth: Infinity });
    console.dir(responseMessage, { depth: Infinity });
  },
});
```
it has the tool invocation problem. I struggle to understand how this is related or why it happens; I am not changing anything else, it's just logging.
Using the newest SDK beta version. ~~I'll create a proper issue once I am home~~, but that is strange.
EDIT: The onFinish worked after I added
```ts
originalMessages: messages,
generateMessageId: generateId,
```
But it's confusing, I am pretty sure it shouldn't be like this.
@ssg-chris I ran into the issue using gpt-5-mini as well
I was able to replicate the problem. Thanks @ssg-chris for providing the details!
Changes go here: https://github.com/vercel/ai/blob/c98373afd40d553f8eeda4c8d3773064ad6b01e5/examples/next-openai/app/api/chat-tool-approval-dynamic/route.ts
Example to replicate is http://localhost:3000/test-tool-approval-dynamic (ask for weather at a location)
I can try to look into it later this week, but if anyone wants to give it a go and investigate further / send a PR, please go ahead
> I ran into the issue using gpt-5-mini as well
@shanktt I was not able to replicate the problem when using openai('gpt-5-mini') in the same example.
Could you have a look at @kartikayy007's pull request and see if that resolves your problem?
- https://github.com/vercel/ai/pull/10203
Same issue here. Don't know if there's a workaround. Is there a way to manipulate the model messages before sending them to the model?
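For what it's worth, convertToModelMessages returns a plain array, so one way to manipulate model messages is to post-process that array before handing it to streamText. A minimal sketch, where the message shape is a simplified stand-in for the SDK's ModelMessage type and the transform (dropping system messages) is only an arbitrary example:

```typescript
// Simplified stand-in; the SDK's ModelMessage type is richer than this.
type ModelMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: unknown;
};

// Arbitrary example transform: strip system messages before sending.
function withoutSystemMessages(messages: ModelMessage[]): ModelMessage[] {
  return messages.filter((m) => m.role !== "system");
}

const converted: ModelMessage[] = [
  { role: "system", content: "You are helpful." },
  { role: "user", content: "hi" },
];

const trimmed = withoutSystemMessages(converted);
console.log(trimmed.length); // 1
```

The result would then be passed as the messages option to streamText; whether any such transform helps with the approval bug itself is a separate question.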