toolChoice: 'required' and { toolName: 'x', type: 'tool' } send the Vercel AI SDK into an endless loop when using streamText
Description
toolChoice: 'required' and { toolName: 'x', type: 'tool' } send the Vercel AI SDK into an endless loop when using streamText.
Code example
import { streamText, tool } from 'ai';
import { z } from 'zod';

const { fullStream } = await streamText({
abortSignal: signal,
experimental_toolCallStreaming: false,
maxSteps: 20,
messages: relevantMessages.slice(),
model,
toolChoice: { toolName: 'webSearch', type: 'tool' },
tools: {
webSearch: tool({
description:
'Search the web for information. Useful for when you need to answer questions about current events or a specific topic. The answer includes sources, which must be referenced in the answer using markdown footnotes.',
execute: async ({ query }) => {
return {
answer: 'Paris',
};
},
parameters: z.object({
query: z
.string()
.describe(
'The query to search for. Phrased in natural language as a question.',
),
}),
}),
},
});
for await (const chunk of fullStream) {
console.log('>>> chunk', chunk);
}
AI provider
@ai-sdk/openai
Additional context
Logs:
{
type: 'tool-call',
toolCallId: 'call_b0DkOmitAwd93Mt4xXd6u1gH',
toolName: 'searchWeb',
args: { query: 'What is the capital of France?' }
}
{
type: 'tool-result',
toolCallId: 'call_b0DkOmitAwd93Mt4xXd6u1gH',
toolName: 'searchWeb',
args: { query: 'What is the capital of France?' },
result: {
answer: 'The capital of France is Paris.',
sources: [
{
name: 'What is the Capital of France? - WorldAtlas',
snippet: "Learn about the history, geography, economy, tourism, and administration of Paris, the capital city of France and the country's largest city.",
url: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html'
},
{
name: 'France | History, Maps, Flag, Population, Cities, Capital, & Facts ...',
snippet: 'The capital and by far the most important city of France is Paris, one of the world’s preeminent cultural and commercial centres.',
url: 'https://www.britannica.com/place/France'
},
{
name: 'Paris - Wikipedia',
snippet: 'Paris is the capital and largest city of France.',
url: 'https://en.wikipedia.org/wiki/Paris'
}
]
}
}
{
type: 'step-finish',
finishReason: 'tool-calls',
usage: { promptTokens: 456, completionTokens: 20, totalTokens: 476 },
experimental_providerMetadata: { openai: { reasoningTokens: 0, cachedPromptTokens: 0 } },
logprobs: undefined,
response: {
id: 'chatcmpl-AZpStkvY5DoWFRuCTA0L5hk1hOMOI',
timestamp: 2024-12-02T01:24:35.000Z,
modelId: 'gpt-4o-2024-11-20'
},
isContinued: false
}
{
type: 'tool-call',
toolCallId: 'call_4jKKKQAlZwguD1glEhCPAbDI',
toolName: 'searchWeb',
args: { query: 'capital of France' }
}
Notice how it starts over with a new tool-call for the same question.
The same pattern repeats indefinitely.
Tool choice required means that you force tool calls every time the LLM is called. Together with maxSteps this enters such a loop. This is working as expected.
Can you describe what you want to achieve? There may be other approaches that could work here.
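To make the mechanics concrete, here is a small self-contained simulation of the step loop (an illustration, not the actual SDK source): because toolChoice is re-applied on every step, a forced tool call means every step finishes with 'tool-calls', and the loop only stops when maxSteps is exhausted.

```typescript
// Illustration only, not the SDK source: with toolChoice 'required' (or a
// specific tool), every step must end in a tool call, so the loop never
// finishes early with 'stop' and runs until maxSteps is exhausted.
type FinishReason = 'stop' | 'tool-calls';

function simulateSteps(maxSteps: number, toolChoice: 'auto' | 'required'): number {
  let steps = 0;
  let finishReason: FinishReason;
  do {
    steps++;
    // 'required' forces a tool call on this step; with 'auto' the model is
    // free to produce a final text answer (modeled here as stopping).
    finishReason = toolChoice === 'required' ? 'tool-calls' : 'stop';
  } while (finishReason === 'tool-calls' && steps < maxSteps);
  return steps;
}

console.log(simulateSteps(20, 'required')); // 20: loops until maxSteps
console.log(simulateSteps(20, 'auto'));     // 1: free to stop after answering
```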
Suppose the user sent a message: "What's the capital of France?"
The user also ticked "Search", so I know their intent is to search.
I want to tell the LLM that for this request it has to use the webSearch tool.
Just leaving toolChoice as 'auto' does not achieve the same result: it sometimes uses the tool, sometimes does not.
My understanding was that by setting toolChoice to 'required' or to a specific tool, I am telling the LLM that it must use a/the tool to answer this question, but that once the answer is resolved, it should not keep looping.
If you explicitly control that they want to use websearch through a flag, I recommend using a RAG approach vs a tool call approach.
Pseudo-code backend:
if (websearch) {
const searchResult = await runMyWebsearch(...);
prompt = buildMyPrompt(searchResult)
} else {
prompt = myDefaultPrompt
}
return streamText( ... , with system or prompt influenced by the above, or by augmenting the last user msg)
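Fleshing out the pseudo-code above into a runnable sketch: runMyWebsearch and buildMyPrompt are hypothetical stand-ins for your own search backend and prompt template, and the returned system/prompt pair is what you would then pass to streamText.

```typescript
// Sketch of the RAG-style branch: run the search first, inject the results
// into the system prompt, then make a single model call with no forced tool.
type SearchHit = { url: string; snippet: string };

// Stub standing in for a real search backend.
async function runMyWebsearch(query: string): Promise<SearchHit[]> {
  return [
    { url: 'https://en.wikipedia.org/wiki/Paris', snippet: 'Paris is the capital and largest city of France.' },
  ];
}

// Hypothetical prompt template: results become markdown footnotes the model can cite.
function buildMyPrompt(hits: SearchHit[]): string {
  return [
    'Answer using the search results below and cite them as markdown footnotes.',
    ...hits.map((h, i) => `[^${i + 1}]: ${h.url} (${h.snippet})`),
  ].join('\n');
}

async function preparePrompt(userMessage: string, websearch: boolean) {
  const system = websearch
    ? buildMyPrompt(await runMyWebsearch(userMessage))
    : 'You are a helpful assistant.';
  // Pass these to streamText({ model, system, prompt }) afterwards.
  return { system, prompt: userMessage };
}
```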
Do you happen to have any real-world examples of this?
@lgrammel after talking through this with folks on Discord, I really feel like this is a bug, or at least a limitation of the Vercel AI SDK.
The fact that the AI SDK does not incorporate the results and just keeps looping in a cycle feels broken.
The issue is that you specify the toolChoice for all steps, and it seems you only want to apply it to the first step. I'll think about a solution.
do we know if there is a way to turn it off after the first turn?
Ended up rewriting without the AI SDK, just using the OpenAI SDK directly. I swear, the code is about 100x easier to reason about now.
I encountered the same problem.
https://sdk.vercel.ai/docs/ai-sdk-core/tools-and-tool-calling
This document describes how maxSteps can be used to invoke tools and let the model analyze and summarize the tool results, but the toolChoice parameter breaks this usage. Perhaps a parameter could be added to ensure a specific tool is called (at least once) without disrupting maxSteps?
There are some scenarios where you cannot rely on the model to determine whether to call a specific tool, such as when the user manually selects to use a private knowledge base to answer questions.
If possible, please add a feature that solves this problem of calling the tools no matter what. For now I have just tried a prompt, but I do think a proper feature would be good in the long run.
> Tool choice required means that you force tool calls every time the LLM is called. Together with maxSteps this enters such a loop. This is working as expected.
> Can you describe what you want to achieve? There may be other approaches that could work here.
i think this would be pretty nice to be able to control what tools are called at a certain step
I solved this by doing:
const result = streamText({
maxSteps: toolChoice === 'auto' ? maxSteps : 1,
...
But I agree, it would be really nice if this worked as expected. There are plenty of use cases for calling a tool once based on a user interaction.
> i think this would be pretty nice to be able to control what tools are called at a certain step
This is also a neat idea.
This is my current solution: add an extra hint to the last user message, but don't save it in the database.
// Helper function to add tool usage hint to the last user message
const addToolUsageHint = (coreMessages: any[], selectedTool: string, availableTools: Record<string, any>): void => {
// Check if selectedTool exists in available tools
if (!availableTools[selectedTool]) {
console.warn(`Selected tool "${selectedTool}" not found in available tools, ignoring`);
return;
}
if (coreMessages.length === 0) return;
const lastMessage = coreMessages[coreMessages.length - 1];
// Only modify user messages
if (lastMessage.role !== 'user') return;
const toolHint = `Please use the "${selectedTool}" tool to help with this request. `;
if (typeof lastMessage.content === 'string') {
// If content is a string, prepend the hint
lastMessage.content = toolHint + lastMessage.content;
} else if (Array.isArray(lastMessage.content)) {
// If content is an array, find the first text part and prepend the hint
const firstTextPart = lastMessage.content.find((part: any) =>
typeof part === 'object' && part !== null && 'type' in part && part.type === 'text'
);
if (firstTextPart && 'text' in firstTextPart) {
firstTextPart.text = toolHint + firstTextPart.text;
} else {
// If no text part found, add one at the beginning
lastMessage.content.unshift({
type: 'text',
text: toolHint
});
}
}
};
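A quick demo of the hint approach above, with a condensed version of the helper inlined so the snippet is self-contained (the full helper handles array content parts as well). The hint is prepended to the last user message right before the model call and never persisted.

```typescript
// Condensed, string-content-only version of addToolUsageHint for a runnable demo.
type Message = { role: string; content: string };

const addToolUsageHint = (
  messages: Message[],
  selectedTool: string,
  availableTools: Record<string, unknown>,
): void => {
  // Ignore unknown tools and empty histories; only touch the last user message.
  if (!availableTools[selectedTool] || messages.length === 0) return;
  const last = messages[messages.length - 1];
  if (last.role !== 'user') return;
  last.content = `Please use the "${selectedTool}" tool to help with this request. ` + last.content;
};

const messages: Message[] = [{ role: 'user', content: "What's the capital of France?" }];
addToolUsageHint(messages, 'webSearch', { webSearch: {} });
console.log(messages[0].content);
// logs: Please use the "webSearch" tool to help with this request. What's the capital of France?
```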
You can use prepareStep in AI SDK 5 for more flexibility, e.g. to force tool calls only in the first step.
Why would toolChoice required enter an infinite loop? This seems like an anti-pattern. Our use case is we have a triage/orchestration agent at a top level, that delegates requests to other sub-agents. It would seem that toolChoice: required would force a tool(s) call until the request is satisfied. For now, we are not using toolChoice: required and are having our system prompt mandate calling a tool(s), but it feels like we should be able to use toolChoice: required w/o an infinite loop being hit...
I stumbled on this thread and found it to be helpful, but wanted to make it more explicit how I solved it. @lgrammel has the right idea:
import { convertToModelMessages, stepCountIs, streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai("gpt-5-mini"),
  messages: convertToModelMessages(messages),
  system: systemPrompt,
  // allow follow-up steps so the model can answer from the tool result
  stopWhen: stepCountIs(5),
  tools: {
    search: tool({
      description: "Search sites given a query.",
      inputSchema: z.object({
        query: z.string().describe("a single search query"),
      }),
      execute: doSearch,
    }),
  },
  // this ensures that the first step of a new message always requires the
  // tool call (prepareStep's stepNumber is 0-based, so the first step is 0)
  prepareStep: ({ stepNumber }) =>
    stepNumber === 0
      ? {
          toolChoice: { type: "tool", toolName: "search" },
        }
      : {},
});