
convertToModelMessages fails with "no tool invocation found" when messages contain approval-responded parts

Open camwest opened this issue 1 month ago • 14 comments

Environment

  • AI SDK: v6.0.0-beta.92
  • Framework: Next.js 16.0.0
  • React: 19.1.0
  • Provider: Anthropic (claude-sonnet-4-5)

Bug Description

After approving a tool execution via addToolApprovalResponse(), calling convertToModelMessages() with the resulting message array throws:

Error: no tool invocation found for tool call toolu_01KALFHf9WPzeYEaSWsbBdhX

Reproduction

1. Define a tool with approval:

import { tool } from "ai";
import { z } from "zod";

const backtestTool = tool({
  description: "Run backtest analysis",
  inputSchema: z.object({
    snapshot_id: z.string(),
    timeRange: z.string(),
  }),
  needsApproval: true,
  execute: async ({ snapshot_id, timeRange }) => {
    // implementation
  }
});

2. Server-side route handler:

import { convertToModelMessages, streamText } from "ai";

const { messages } = await req.json();

const result = streamText({
  model: anthropicModel,
  messages: convertToModelMessages(messages), // ← FAILS HERE
  tools: { backtestSnapshot: backtestTool },
});

3. Client-side approval:

const chatHook = useChat({
  sendAutomaticallyWhen: (options) =>
    lastAssistantMessageIsCompleteWithApprovalResponses(options),
});

// User clicks Approve button:
chatHook.addToolApprovalResponse({
  id: part.approval.id,
  approved: true,
});

Expected Behavior

convertToModelMessages() should handle messages containing parts with state: "approval-responded" and allow the tool to execute.

Actual Behavior

Server throws Error: no tool invocation found for tool call toolu_... when processing messages with approval-responded parts.

Workaround Attempted

Filtering out approval-responded parts before conversion prevents the error but breaks the approval flow:

function filterApprovalRespondedParts(messages: any[]): any[] {
  return messages.map((msg) => {
    if (msg.role !== "assistant" || !msg.parts) return msg;
    return {
      ...msg,
      parts: msg.parts.filter((part: any) =>
        !part.toolCallId || part.state !== "approval-responded"
      ),
    };
  });
}

const result = streamText({
  messages: convertToModelMessages(filterApprovalRespondedParts(messages)),
  // ...
});

Problem with workaround: The model has no memory of the approval and requests approval again, creating an infinite loop instead of executing the tool.

Impact

Tool execution approval is currently non-functional when using convertToModelMessages() in server-side route handlers.

camwest avatar Nov 01 '25 21:11 camwest

try passing the tools in the 2nd arg of convertToModelMessages
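
For reference, a minimal sketch of that suggestion applied to the repro above (assuming the same tools object is also passed to streamText):

const tools = { backtestSnapshot: backtestTool };

const result = streamText({
  model: anthropicModel,
  messages: convertToModelMessages(messages, { tools }), // tools passed as the 2nd argument
  tools,
});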

lgrammel avatar Nov 01 '25 22:11 lgrammel

Oh shoot. Thank you, will test!

camwest avatar Nov 01 '25 22:11 camwest

Hi @lgrammel. I'm also facing the same error, and I did try passing the tools to the convertToModelMessages function, but I still get the same error. My setup is identical to @camwest's: I have tools that execute on the server side, and I'm handling the automatic resending of messages via the lastAssistantMessageIsCompleteWithApprovalResponses function on the client side. Any suggestions?

namanpatel6 avatar Nov 02 '25 16:11 namanpatel6

Tried passing tools to convertToModelMessages but error persists:

const tools = {
  web_search: webSearchTool,
  ...createQueryBuilderTools(conversationId, tiltUuid, tilt),
};

const convertedMessages = convertToModelMessages(messages, { tools });

const result = streamText({
  model: anthropicModel,
  messages: convertedMessages,
  tools,
  // ...
});

What happens:

  1. convertToModelMessages correctly produces tool-call, tool-approval-request, tool-approval-response structure
  2. Tool executes successfully (backtest completes, stores result in DB)
  3. But streaming fails with Error: no tool invocation found for tool call toolu_01QDDtB2CKVHf1z4FZWfgGnd
  4. Browser gets ERR_INCOMPLETE_CHUNKED_ENCODING

Additional symptom: UI shows duplicate assistant messages (same message ID rendered twice).

Error occurs during response piping phase after tool execution, not during message conversion.

camwest avatar Nov 02 '25 17:11 camwest

@lgrammel Any thoughts?

namanpatel6 avatar Nov 03 '25 22:11 namanpatel6

Also seeing the same behavior as @camwest even after trying to pass tools into convertToModelMessages

shanktt avatar Nov 06 '25 20:11 shanktt

Same

ssg-chris avatar Nov 07 '25 00:11 ssg-chris

This is honestly cooked, hoping this can be fixed soon!

bdok23 avatar Nov 07 '25 03:11 bdok23

Is it possible that the problem only occurs when using Sonnet 4.5? @bdok23

ssg-chris avatar Nov 07 '25 08:11 ssg-chris

I have been investigating this issue for some time, and it is very weird: as soon as I add onFinish it fails, even using this example:

This is fine:

return createAgentUIStreamResponse({
    agent: weatherWithApprovalAgent,
    messages,
  });

Tool invocations work. But as soon as I change it to this:

return createAgentUIStreamResponse({
    agent: weatherWithApprovalAgent,
    messages,
    sendReasoning: false,
    onFinish: async ({ messages, responseMessage }) => {
      console.dir(messages, { depth: Infinity });
      console.dir(responseMessage, { depth: Infinity });
    },
  });

It has the tool invocation problem. I struggle to understand how it is related or why it happens; I am not changing a thing, it's just logging.

Using the newest SDK beta version. ~~I'll create a proper issue once I am home~~, but that is strange.

EDIT: The onFinish worked after I added

    originalMessages: messages,
    generateMessageId: generateId,

But it's confusing; I am pretty sure it shouldn't have to be like this.
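
For clarity, a sketch of the full call with those two options added, reusing the exact options shown above (generateId comes from the ai package):

import { generateId } from "ai";

return createAgentUIStreamResponse({
  agent: weatherWithApprovalAgent,
  messages,
  originalMessages: messages,    // the UI messages from the request body
  generateMessageId: generateId, // ID generator for newly created messages
  sendReasoning: false,
  onFinish: async ({ messages, responseMessage }) => {
    console.dir(messages, { depth: Infinity });
    console.dir(responseMessage, { depth: Infinity });
  },
});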

ssg-chris avatar Nov 08 '25 17:11 ssg-chris

@ssg-chris I ran into the issue using gpt-5-mini as well

shanktt avatar Nov 10 '25 17:11 shanktt

I was able to replicate the problem, thanks @ssg-chris for providing the details!

Changes go here: https://github.com/vercel/ai/blob/c98373afd40d553f8eeda4c8d3773064ad6b01e5/examples/next-openai/app/api/chat-tool-approval-dynamic/route.ts

Example to replicate is http://localhost:3000/test-tool-approval-dynamic (ask for weather at a location)

I can try to look into it later this week, but if anyone wants to give it a go and investigate further / send a PR, please go ahead

gr2m avatar Nov 11 '25 01:11 gr2m

I ran into the issue using gpt-5-mini as well

@shanktt I was not able to replicate the problem when using openai('gpt-5-mini') in the same example.

gr2m avatar Nov 11 '25 01:11 gr2m

Could you have a look at @kartikayy007's pull request and see if that resolves your problem?

  • https://github.com/vercel/ai/pull/10203

gr2m avatar Nov 14 '25 18:11 gr2m

Same issue here. Don't know if there's a workaround. Is there a way to manipulate the model messages before sending them to the model?
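
(For what it's worth, convertToModelMessages returns a plain array of model messages, so one possible approach is to transform that array before handing it to streamText. A minimal sketch, assuming the same anthropicModel/tools setup as earlier in the thread; the map step is a no-op placeholder for whatever rewriting you need:)

const modelMessages = convertToModelMessages(messages, { tools });

// placeholder transform step: inspect or rewrite the converted messages here
const patchedMessages = modelMessages.map((message) => ({ ...message }));

const result = streamText({
  model: anthropicModel,
  messages: patchedMessages,
  tools,
});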

sabinayakc avatar Nov 28 '25 19:11 sabinayakc