"no tool invocation found" error after approving tool with `needsApproval: true` in `createAgentUIStream`
Description
When using `createAgentUIStream` with tools that require approval (`needsApproval: true`), after the user approves the tool call via `addToolApprovalResponse`, the tool executes successfully but the SDK throws an error when trying to process the tool result:

```
Error: no tool invocation found for tool call <toolCallId>
```

This error occurs in `processUIMessageStream` when processing `tool-output-available` chunks, because `getToolInvocation()` cannot find the tool part in `state.message.parts`, which is empty.
Note: After combing through the codebase, this looks like it could be an oversight rather than intentional design. All other places in the codebase that call `toUIMessageStream()` pass `originalMessages`, and the receiving functions (`handleUIMessageStreamFinish`, `getResponseUIMessageId`) actively use it for core functionality like message continuation and ID generation. `createAgentUIStream` is the only exception that doesn't pass this parameter, which causes a bug when processing tool results after approval.
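For comparison, here is a minimal sketch of a typical non-agent chat route that does pass `originalMessages` when converting the stream. The route path and model id are placeholders, but the `toUIMessageStreamResponse({ originalMessages })` pattern is the one the other call sites rely on:

```ts
// app/api/chat-plain/route.ts (illustrative sketch; not taken from the SDK or this app)
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: 'openai/gpt-4o-mini', // placeholder model id
    messages: convertToModelMessages(messages),
  });

  // Passing originalMessages lets the SDK continue the previous assistant message,
  // so earlier tool parts can be found and updated when results arrive.
  return result.toUIMessageStreamResponse({ originalMessages: messages });
}
```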
Root Cause
In `createAgentUIStream` (located in `src/agent/create-agent-ui-stream.ts`), when calling `result.toUIMessageStream()`, the `originalMessages` parameter is not passed through. This causes:

- When the client sends back messages with `approval-responded` state, `createAgentUIStream` is called with those messages
- The tool executes successfully and emits a `tool-output-available` chunk
- `handleUIMessageStreamFinish` receives `originalMessages: []` (empty array)
- `createStreamingUIMessageState` initializes with `lastMessage: undefined` (because `originalMessages` is empty)
- `state.message.parts` is initialized as empty `[]`
- When the `tool-output-available` chunk arrives, `getToolInvocation()` searches the empty `state.message.parts` and throws (see the simplified sketch below)
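To make the failure mode concrete, here is a hypothetical simplification of that lookup (not copied from the SDK source); with `originalMessages` omitted, the parts array is empty, so the lookup can never succeed:

```ts
// Hypothetical simplification of the lookup described above; not the SDK's actual code.
type ToolPart = { type: string; toolCallId?: string };

function findToolInvocation(parts: ToolPart[], toolCallId: string): ToolPart {
  const part = parts.find(p => p.toolCallId === toolCallId);
  if (part == null) {
    // This is the error surfaced in the logs below.
    throw new Error(`no tool invocation found for tool call ${toolCallId}`);
  }
  return part;
}

// With originalMessages omitted, state.message.parts is [] when the
// tool-output-available chunk arrives, so the lookup always throws:
// findToolInvocation([], 'call_r9idl27d9Z1ogNsl30g87fF3');
```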
Steps to Reproduce
- Set up a Next.js app with AI SDK v6 beta (tested with `ai@6.0.0-beta.99`)
- Create an agent with a tool that has `needsApproval: true`:
```ts
// Import locations for ToolLoopAgent and gateway are assumptions about the v6 beta;
// `start` and `renewal` come from the reporter's own workflow code.
import { tool, ToolLoopAgent } from 'ai';
import { gateway } from '@ai-sdk/gateway';
import { z } from 'zod';

const agent = new ToolLoopAgent({
  model: gateway('openai/gpt-4o-mini'),
  tools: {
    startRenewalWorkflow: {
      ...tool({
        description: 'Kick off the durable renewal workflow',
        inputSchema: z.object({
          accountId: z.string(),
          effectiveDate: z.string(),
          // ... other fields
        }),
        execute: async (payload) => {
          const run = await start(renewal, [payload]);
          return { runId: run.runId };
        },
      }),
      needsApproval: true,
    },
  },
});
```
- Use `createAgentUIStream` in an API route:
```ts
// app/api/chat/route.ts
// `agent` is the ToolLoopAgent defined above
import { createAgentUIStream, createUIMessageStreamResponse } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const agentStream = await createAgentUIStream({
    agent,
    messages,
  });

  return createUIMessageStreamResponse({ stream: agentStream });
}
```
- On the client, use `useChat` with `sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses` (a minimal client sketch follows these steps)
- Trigger the tool call (e.g., "start a renewal workflow")
- Approve the tool call via `addToolApprovalResponse({ id: approvalId, approved: true })`
- Observe the error: the tool executes successfully, but when the `tool-output-available` chunk arrives, the SDK throws:

```
Error: no tool invocation found for tool call call_djl820doN3Z1ogNgFlsk34jg
```
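A minimal client sketch for those steps, assuming the v6 beta approval flow. The import paths are assumptions; `lastAssistantMessageIsCompleteWithApprovalResponses` and `addToolApprovalResponse` are used as named in this report, and the `'approval-requested'` state and `part.approval.id` shape are inferred from the `'approval-responded'` part shown in the logs below:

```tsx
'use client';

// Import paths are assumptions based on the AI SDK v6 beta.
import { useChat } from '@ai-sdk/react';
import { lastAssistantMessageIsCompleteWithApprovalResponses } from 'ai';

export function Chat() {
  const { messages, sendMessage, addToolApprovalResponse } = useChat({
    // Resubmit automatically once every pending approval has a response.
    sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses,
  });

  return (
    <div>
      {messages.map(message =>
        message.parts.map((part: any, index: number) =>
          part.type === 'tool-startRenewalWorkflow' &&
          part.state === 'approval-requested' ? (
            <button
              key={index}
              onClick={() =>
                // approval.id shape inferred from the "approval-responded" log below
                addToolApprovalResponse({ id: part.approval.id, approved: true })
              }
            >
              Approve renewal workflow
            </button>
          ) : null,
        ),
      )}
      <button onClick={() => sendMessage({ text: 'start a renewal workflow' })}>
        Start a renewal workflow
      </button>
    </div>
  );
}
```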
Expected Behavior
After approving a tool call:
- The tool should execute successfully ✅ (this works)
- The `tool-output-available` chunk should arrive ✅ (this works)
- The SDK should find the tool invocation in `state.message.parts` and update it with the result ❌ (this fails)
- The tool result should stream back to the client and be displayed in the UI ❌ (this fails due to the error)
Actual Behavior
The tool executes successfully, but when processing the `tool-output-available` chunk:

- `state.message.parts` is empty `[]`
- `getToolInvocation()` cannot find the tool part
- The error is thrown: `Error: no tool invocation found for tool call <toolCallId>`
- The stream fails and no further messages are received
Logs/Evidence
From server logs during reproduction:
```
[AI SDK] createAgentUIStream: Input messages count: 4
[AI SDK] createAgentUIStream: Input messages: [
  {
    "id": "LQYAhirA6bs8OLkK",
    "role": "assistant",
    "parts": [
      {
        "type": "tool-startRenewalWorkflow",
        "toolCallId": "call_r9idl27d9Z1ogNsl30g87fF3",
        "state": "approval-responded",
        "approval": { "id": "aitxt-D73UHLd68Soydj2l39fjdkl3", "approved": true }
      }
    ]
  }
]
[AI SDK] collectToolApprovals: Found approval responses: 1
[AI SDK] agent.stream: Executing approved tools: [
  { toolCallId: 'call_r9idl27d9Z1ogNsl30g87fF3', toolName: 'startRenewalWorkflow' }
]
[AI SDK] agent.stream: Tool execution result: { toolCallId: 'call_r9idl27d9Z1ogNsl30g87fF3', type: 'tool-result' }
[AI SDK] processUIMessageStream: Processing tool-output-available chunk
[AI SDK] processUIMessageStream: Current state.message.parts before lookup: [] // ❌ EMPTY!
[AI SDK] getToolInvocation: state.message.parts: [] // ❌ EMPTY!
[AI SDK] getToolInvocation: FAILED - No tool invocation found!
Error: no tool invocation found for tool call call_r9idl27d9Z1ogNsl30g87fF3
```
The key issue is visible here:

- `handleUIMessageStreamFinish: originalMessages count: 0` (should be 4)
- `createStreamingUIMessageState: lastMessage: null` (should be the assistant message with the tool part)
Proposed Solution
Pass `originalMessages: validatedMessages` to `toUIMessageStream()` in `createAgentUIStream`:

File: `src/agent/create-agent-ui-stream.ts`

```ts
// Before
return result.toUIMessageStream(uiMessageStreamOptions);

// After
return result.toUIMessageStream({
  ...uiMessageStreamOptions,
  originalMessages: validatedMessages,
});
```
This ensures that when `handleUIMessageStreamFinish` initializes the state, it has access to the previous assistant message containing the tool invocation, allowing `getToolInvocation()` to find it when processing tool results.
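As a hypothetical illustration of that mechanism (not the SDK's actual source), seeding the streaming state from the last assistant message in `originalMessages` is what keeps previously streamed tool parts available:

```ts
// Hypothetical simplification of how originalMessages seeds the streaming state;
// not copied from the SDK source.
type UIMessageLike = { role: string; parts: unknown[] };

function createStreamingState(originalMessages: UIMessageLike[]) {
  const lastMessage = originalMessages.at(-1);
  return {
    // Continue the previous assistant message so its tool parts are retained.
    // With an empty originalMessages array this falls back to empty parts,
    // which is exactly the failure described above.
    message:
      lastMessage?.role === 'assistant'
        ? lastMessage
        : { role: 'assistant', parts: [] as unknown[] },
  };
}
```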
Additional Context
- This is specifically related to the approval workflow feature in AI SDK v6
- The tool execution itself works correctly - the issue is only in matching the result back to the invocation
- This appears to be a missing parameter rather than a design flaw
- The fix is minimal and doesn't change the API surface
Questions
I'd like to confirm:
- Was excluding `originalMessages` intentional? Is there a reason `createAgentUIStream` doesn't pass `originalMessages` to `toUIMessageStream`?
- Are there edge cases where passing `originalMessages` could cause issues?
AI SDK Version
- ai: 6.0.0-beta.99
- next: 16.0.0
Code of Conduct
- [x] I agree to follow this project's Code of Conduct
I had similar issues here:
https://github.com/vercel/ai/issues/9968#issuecomment-3506757436
Just curious: you confirmed that adding `originalMessages` works,

```ts
originalMessages: messages,
generateMessageId: generateId,
```

but what if you get rid of the `onFinish` callback? I had a nightmare with this.
Yeah, I can confirm that it works when I add the following:

```ts
// src/agent/create-agent-ui-stream.ts
async function createAgentUIStream({
  agent,
  messages,
  options,
  ...uiMessageStreamOptions
}) {
  const validatedMessages = await validateUIMessages({
    messages,
    tools: agent.tools,
  });

  const modelMessages = convertToModelMessages(validatedMessages, {
    tools: agent.tools,
  });

  const result = await agent.stream({
    prompt: modelMessages,
    options,
  });

  return result.toUIMessageStream({
    ...uiMessageStreamOptions,
    originalMessages: validatedMessages, // the added parameter
  });
}
```
Actually I just tried commenting out the onFinish callback to test things as you mentioned and it seems like it's working then! Barring anything I needed to do with the messages within that callback.
I tried passing `originalMessages` in my chat route like below, but that didn't seem to make sense.

```ts
const agentStream = await createAgentUIStream({
  agent,
  messages: validatedMessages,
  sendStart: true,
  sendFinish: true,
  onFinish: async ({ responseMessage }: any) => {
    // Save the AI assistant's response after streaming completes
    if (responseMessage && responseMessage.id && chatId) {
      const assistantMessages = convertUIMessagesToNewMessages(
        [responseMessage as any as ChatUIMessage],
        chatId,
      );
      await saveChatMessages({ messages: assistantMessages });
    }
  },
});

// @ts-ignore
return createUIMessageStreamResponse({
  stream: agentStream,
  originalMessages: validatedMessages,
});
```
Yeah, that's insane, it works perfectly unless you use `onFinish`. I would look into it if I had time, but it seems crazy. You can't return anything in the `onFinish`, so you can't even do:

- onFinish -> data
- process data
- return original data

It just breaks in the middle.
Could you have a look at @kartikayy007's pull request and see if that resolves your problem?
- https://github.com/vercel/ai/pull/10203
> Could you have a look at @Kartikayy007's pull request and see if that resolves your problem?
Yes, this worked! Let me know what else you'd like to close this out 🫡
> Yeah, that's insane, it works perfectly unless you use `onFinish`. I would look into it if I had time, but it seems crazy. You can't return anything in the `onFinish`, so you can't even do:
>
> - onFinish -> data
> - process data
> - return original data
>
> It just breaks in the middle.
Yeah, removing `onFinish` worked for me as well.
```ts
// Before (breaks)
const start = Date.now();
return createAgentUIStreamResponse({
  agent: Agent,
  messages,
  options: {
    nowIso: new Date().toISOString(),
    integrations: {
      magento2: {
        endpointUrl: data.endpointUrl,
        accessToken: apiToken,
      },
    },
    domain: routing.domain,
    intent: routing.intent,
  },
  onFinish: () => {
    const ms = Date.now() - start;
    console.log("[orchestrator] full answer time:", ms, "ms");
  },
});
```

```ts
// After (works)
const start = Date.now();
return createAgentUIStreamResponse({
  agent: Agent,
  messages,
  options: {
    nowIso: new Date().toISOString(),
    integrations: {
      magento2: {
        endpointUrl: data.endpointUrl,
        accessToken: apiToken,
      },
    },
    domain: routing.domain,
    intent: routing.intent,
  },
});
```
> Yes, this worked! Let me know what else you'd like to close this out 🫡
Marking this as closed then