
OpenAI provider: Item of type 'reasoning' was provided without its required following item

Open ShadowWalker2014 opened this issue 4 months ago • 28 comments

Description

We keep getting this error when using the OpenAI GPT-5 model with reasoning: Item 'rs_68b3836e52148191a533bfa8f266c7060be42a56df013daa' of type 'reasoning' was provided without its required following item.


AI SDK Version

  • ai: 5.0.0
  • @ai-sdk/anthropic: ^2.0.6
  • @ai-sdk/google: ^2.0.8
  • @ai-sdk/google-vertex: ^2.2.27
  • @ai-sdk/groq: ^2.0.14
  • @ai-sdk/mistral: ^2.0.9
  • @ai-sdk/openai: ^2.0.19
  • @ai-sdk/openai-compatible: ^1.0.11
  • @ai-sdk/react: ^2.0.22

Code of Conduct

  • [x] I agree to follow this project's Code of Conduct

ShadowWalker2014 avatar Aug 30 '25 23:08 ShadowWalker2014

Can you share a full code snippet that reproduces the issue? Is it always related to invalid tool calls? Which model are you using?

lgrammel avatar Sep 01 '25 07:09 lgrammel

Getting same issue - happens when a tool call fails.


ben-edge avatar Sep 01 '25 15:09 ben-edge

If one of you could share a minimal reproducible test case, it would help us prioritize this issue.

gr2m avatar Sep 02 '25 17:09 gr2m

Getting the same related error:

Using GPT-5 with low reasoning effort.

Error [AI_APICallError]: Item 'rs_68b72f0a0fd48195b0fe82e74c730f700d2f7ee01550624d' of type 'reasoning' was provided without its required following item.
    at M.errorSchema (.next/server/chunks/208.js:33:7522)
    at async z (.next/server/chunks/208.js:33:4039)
    at async as.doStream (.next/server/chunks/208.js:43:20508)
    at async fn (.next/server/chunks/208.js:29:24427)
    at async (.next/server/chunks/208.js:27:26438)
    at async bS (.next/server/chunks/208.js:27:27987)
    at async e (.next/server/chunks/208.js:29:23597)
    at async Object.flush (.next/server/chunks/208.js:29:30788) {
  cause: undefined,
  url: 'https://api.openai.com/v1/responses',
  requestBodyValues: [Object],
  statusCode: 400,
  responseHeaders: [Object],
  responseBody: '{\n' +
    '  "error": {\n' +
    `    "message": "Item 'rs_68b72f0a0fd48195b0fe82e74c730f700d2f7ee01550624d' of type 'reasoning' was provided without its required following item.",\n` +
    '    "type": "invalid_request_error",\n' +
    '    "param": "input",\n' +
    '    "code": null\n' +
    '  }\n' +
    '}',
  isRetryable: false,
  data: [Object]
}

Theonlyhamstertoh avatar Sep 03 '25 05:09 Theonlyhamstertoh

It is just normal usage of the streamText() call with tools for me. It doesn't happen all the time, so I really can't give you something you can reliably reproduce. But it happens frequently enough in production that many issues like this get reported every day, which is a huge pain.

To reproduce, probably just run an agent with stopWhen set to more than 20 steps, using any gpt-5* model with reasoning turned on and effort = low, summary = auto.

This will definitely show up within 5-10 tries.

ShadowWalker2014 avatar Sep 04 '25 07:09 ShadowWalker2014

@gr2m @lgrammel here is a code snippet with the settings; it should help you reproduce the issue and see why it happens:

import { streamText } from 'ai';

// Minimal reproduction example for GPT-5-mini streamText configuration
async function reproduceGpt5MiniIssue() {
  const streamConfig = {
    // Model configuration
    model: 'gpt-5-mini',

    // Messages array (your messages here)
    messages: [
      // ... your message array
    ],

    // Tools (if any)
    tools: {
      // ... your tools
    },

    // GPT-5 specific parameters
    maxCompletionTokens: 128000, // Fixed max output for GPT-5-mini
    temperature: 1, // Only supported value for GPT-5

    // CRITICAL: GPT-5-mini provider settings
    providerOptions: {
      openai: {
        reasoningEffort: 'low',     // Key setting for GPT-5-mini
        reasoningSummary: 'auto'   // Required for proper reasoning structure
      }
    },

    // Step preparation (where context loss may occur)
    prepareStep: ({ steps, stepNumber, model, messages }) => {
      // Your prepareStep logic here
      return {};
    }
  };

  // Execute streamText with GPT-5-mini configuration
  const result = streamText(streamConfig);

  // Return the stream
  return result.toUIMessageStream();
}

// Export for use
export { reproduceGpt5MiniIssue };

ShadowWalker2014 avatar Sep 04 '25 07:09 ShadowWalker2014

I would echo this with my case. I am seeing it without setting any of the OpenAI provider options. I couldn't create a basic repro case, but it happens when using a lot of tool calls and then one of them fails. Seems linked to this issue: https://github.com/vercel/ai/issues/7099

ben-edge avatar Sep 04 '25 09:09 ben-edge

Just fixed this error for myself.

For me it was caused by persisting two reasoning parts with different itemIds side by side. This happened because OpenAI's code interpreter tool call wasn't returning from streamText for me. So if my agent reasoned, then called code interpreter, then reasoned again, my message history saved two reasoning items side by side and went into an error state.

I solved it by moving to E2B containers, which always show up as tool calls.
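The fix described above can also be sketched as a small filter over the persisted parts before saving message history. The `Part` shape here is a hypothetical simplification of the AI SDK's assistant content parts, not the SDK's own types:

```typescript
// Hypothetical simplified shape of an assistant content part.
type Part = { type: string; [key: string]: unknown };

// Drop a reasoning part whenever the part immediately after it is another
// reasoning part, so no two reasoning items are persisted side by side.
function dropAdjacentReasoning(parts: Part[]): Part[] {
  return parts.filter(
    (part, i) =>
      !(part.type === 'reasoning' && parts[i + 1]?.type === 'reasoning'),
  );
}
```

This is only a sketch of the workaround idea; it loses the first of two adjacent reasoning items, which beats a hard 400 from the Responses API.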

TheSlavant avatar Sep 07 '25 01:09 TheSlavant

I traced the error back to an incorrect tool call (the tool call fixing step):

{"output":{"type":"error-text","value":"Invalid input for tool write_todo: Type validation failed: Value: {\"merge\":true,\"todos\":[{\"id\":\"export-1\",\"status\":\"completed\"},{\"id\":\"export-2\",\"status\":\"completed\"},{\"id\":\"export-3\",\"status\":\"in_progress\"}]}.\nError message: [\n  {\n    \"code\": \"invalid_type\",\n    \"expected\": \"string\",\n    \"received\": \"undefined\",\n    \"path\": [\n      \"todos\",\n      0,\n      \"content\"\n    ],\n    \"message\": \"Required\"\n  },\n  {\n    \"code\": \"invalid_type\",\n    \"expected\": \"string\",\n    \"received\": \"undefined\",\n    \"path\": [\n      \"todos\",\n      1,\n      \"content\"\n    ],\n    \"message\": \"Required\"\n  },\n  {\n    \"code\": \"invalid_type\",\n    \"expected\": \"string\",\n    \"received\": \"undefined\",\n    \"path\": [\n      \"todos\",\n      2,\n      \"content\"\n    ],\n    \"message\": \"Required\"\n  }\n]"},"toolCallId":"call_jFiaS2WKBZDR0EZfcWetwbOt","toolName":"write_todo","type":"tool-result"}

Whenever the model provides incorrect tool input args, this reliably triggers!

Might be related to the AI SDK's tool call repair?

ShadowWalker2014 avatar Sep 07 '25 22:09 ShadowWalker2014

For me, this happens when I stop/abort the request in the middle of reasoning and then just add a new message in the chat. It can easily be reproduced in ai-sdk-reasoning-starter.

P.S. Both ai-sdk-reasoning-starter and ai-chatbot are set up so that the stop/abort button is disabled while reasoning is in progress. I didn't want users to be forced to wait until streaming finishes if they change their mind, so I removed that disable logic. As a result, the issue only shows up when you stop the request during reasoning.
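One hedged way to guard against this abort scenario is to trim any trailing reasoning parts before persisting the aborted assistant message, since a reasoning item at the very end of a message by definition has no following item. The `Part` shape is a hypothetical simplification:

```typescript
// Hypothetical simplified shape of an assistant content part.
type Part = { type: string; [key: string]: unknown };

// After an abort mid-reasoning, the last assistant message may end with a
// reasoning part that has no following item. Trim such trailing reasoning
// parts so the next request does not replay an orphaned reasoning item.
function trimTrailingReasoning(parts: Part[]): Part[] {
  const trimmed = [...parts];
  while (trimmed.length > 0 && trimmed[trimmed.length - 1].type === 'reasoning') {
    trimmed.pop();
  }
  return trimmed;
}
```

This is a sketch of a defensive cleanup, not an SDK API; the trimmed reasoning is lost, but the follow-up request succeeds.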

aramvr avatar Sep 08 '25 18:09 aramvr

I resolved this by enforcing argument bounds with Zod at the tool boundary. Smaller models like gpt-5-nano were generating args that exceeded schema limits, which led to invalid tool calls and then the “reasoning without following item” error. Example: the model returned 5 queries for a vector search while the schema caps it at 4. Preprocessing the args to clamp to the schema (or using a more capable model that respects constraints) eliminated the issue entirely.
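Without pulling in Zod, the clamping idea above can be sketched as a plain preprocessing step applied to the model's raw args before validation. The arg shape and the cap of 4 are hypothetical, mirroring a schema like `z.array(z.string()).max(4)`:

```typescript
// Hypothetical args shape for a vector-search tool whose schema caps
// the number of queries. Clamp model-provided args to the schema bound
// instead of letting validation fail the whole tool call.
function clampSearchArgs(
  args: { queries: string[] },
  maxQueries = 4,
): { queries: string[] } {
  return { ...args, queries: args.queries.slice(0, maxQueries) };
}
```

The same pattern works for string lengths, numeric ranges, and enum fallbacks: truncate or coerce to the nearest schema-legal value rather than surfacing an error-text tool result.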

neddes avatar Sep 09 '25 15:09 neddes

I don’t get why this issue isn’t treated with more urgency. Can someone from the Vercel team explain:

  • Why does this happen?
  • What’s blocking the fix?
  • Is there any quick workaround before a proper fix?

Right now, it completely blocks us from using the GPT-5 model, both directly and via the Vercel AI Gateway. This should really get more attention.

aramvr avatar Sep 13 '25 10:09 aramvr

@aramvr I see that they had this PR, which they verified to have fixed the issue. However, after upgrading to the latest version I still see this issue a lot.

dddkhoa avatar Sep 20 '25 12:09 dddkhoa

This might be related to https://github.com/vercel/ai/issues/8811

iamcrisb avatar Sep 21 '25 22:09 iamcrisb

Can you share a fully working example that we can run that reproduces the problem? @ShadowWalker2014 your example had several type errors.

Here is what I run and I didn't get an error

import { OpenAIProviderOptions } from '@ai-sdk/openai/internal';
import { streamText } from 'ai';
import 'dotenv/config';

// Minimal reproduction example for GPT-5-mini streamText configuration
async function main() {
  // Execute streamText with GPT-5-mini configuration
  const result = streamText({
    // Model configuration
    model: 'gpt-5-mini',

    // Messages array (your messages here)
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Explain the theory of relativity.' }
    ],

    // GPT-5 specific parameters
    temperature: 1,

    // CRITICAL: GPT-5-mini provider settings
    providerOptions: {
      openai: {
        reasoningEffort: 'low',
        maxCompletionTokens: 128000,
      } satisfies OpenAIProviderOptions
    },
  });


  console.log();
  console.log('Token usage:', await result.usage);
  console.log('Finish reason:', await result.finishReason);
}

main().catch(console.error)

The problem I've seen in other cases is that messages came from a database or other sources instead of a prior AI SDK call.

gr2m avatar Sep 22 '25 00:09 gr2m

Here's a small repro @gr2m

import { openai } from "@ai-sdk/openai";
import type { OpenAIProviderOptions } from "@ai-sdk/openai/internal";
import { streamText, type ModelMessage, type Tool } from "ai";
import "dotenv/config";

// Minimal reproduction example for GPT-5-mini streamText configuration
async function main() {
  const messages: ModelMessage[] = [
    { role: "system", content: "You are a helpful assistant." },
    {
      role: "user",
      content: "How are you?",
    },
  ];
  let iterations = 0;
  while (iterations < 3) {
    iterations++;
    const result = streamText({
      // Model configuration
      model: openai("gpt-5"),

      // Messages array (your messages here)
      messages,

      tools: {
        web_search_preview: openai.tools.webSearch({}) as Tool<{}, unknown>,
      },
      providerOptions: {
        openai: {
          reasoningEffort: "medium",
        } satisfies OpenAIProviderOptions,
      },
      onError: (error) => {
        console.error("============ HERE COMES THE ERROR ==========================");
        console.error("Error:", error);
        console.error("================ STATE OF MESSAGES ==========================");
        console.dir(messages, { depth: null });
        console.error("============ HERE FINISHES THE ERROR ==========================");
      },
      onFinish: (opts) => {
        messages.push(...opts.response.messages);
      },
    });

    console.log("Iteration:", iterations);
    console.log("Token usage:", await result.usage);
    console.log("Finish reason:", await result.finishReason);
    console.log("Result:", await result.text);
    messages.push({
      role: "user",
      content: "Can you find one news article about React (js framework) and return the headline?",
    });
  }

  console.log();
}

main().catch(console.error);

Logs:

Clearly in this case it's happening because the model is calling the wrong tool. If I change the tool name to just web_search, the error goes away. But perhaps this failure mode should never happen?

Iteration: 1
Token usage: {
  inputTokens: 4438,
  outputTokens: 21,
  totalTokens: 4459,
  reasoningTokens: 0,
  cachedInputTokens: 4352
}
Finish reason: stop
Result: I’m doing well, thanks for asking! How are you doing today?
Iteration: 2
Token usage: {
  inputTokens: 117429,
  outputTokens: 1308,
  totalTokens: 118737,
  reasoningTokens: 1280,
  cachedInputTokens: 93568
}
Finish reason: stop
Result: React 19 builds on async transitions. ([infoworld.com](https://www.infoworld.com/article/2337306/react-19-builds-on-async-transitions.html?utm_source=openai))
Iteration: 3
============ HERE COMES THE ERROR ==========================
Error: {
  error: APICallError [AI_APICallError]: Item 'rs_68d46ffe895081a3ba72c0b5beee0bce0e58292eb8bc7725' of type 'reasoning' was provided without its required following item.
      at file:///some-folder/node_modules/@ai-sdk/provider-utils/dist/index.mjs:847:14
      at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
      at async postToApi (file:///some-folder/node_modules/@ai-sdk/provider-utils/dist/index.mjs:699:28)
      at async OpenAIResponsesLanguageModel.doStream (file:///some-folder/node_modules/@ai-sdk/openai/dist/index.mjs:2923:50)
      at async fn (file:///some-folder/apps/some-app/node_modules/ai/dist/index.mjs:4919:27)
      at async file:///some-folder/apps/some-app/node_modules/ai/dist/index.mjs:1513:22
      at async _retryWithExponentialBackoff (file:///some-folder/apps/some-app/node_modules/ai/dist/index.mjs:1664:12)
      at async streamStep (file:///some-folder/apps/some-app/node_modules/ai/dist/index.mjs:4875:15)
      at async fn (file:///some-folder/apps/some-app/node_modules/ai/dist/index.mjs:5216:9)
      at async file:///some-folder/apps/some-app/node_modules/ai/dist/index.mjs:1513:22 {
    cause: undefined,
    url: 'https://api.openai.com/v1/responses',
    requestBodyValues: {
      model: 'gpt-5',
      input: [Array],
      temperature: undefined,
      top_p: undefined,
      max_output_tokens: undefined,
      max_tool_calls: undefined,
      metadata: undefined,
      parallel_tool_calls: undefined,
      previous_response_id: undefined,
      store: undefined,
      user: undefined,
      instructions: undefined,
      service_tier: undefined,
      include: [Array],
      prompt_cache_key: undefined,
      safety_identifier: undefined,
      top_logprobs: undefined,
      reasoning: [Object],
      tools: [Array],
      tool_choice: 'auto',
      stream: true
    },
    statusCode: 400,
    responseHeaders: {
      [REDACTED]
    },
    responseBody: '{\n' +
      '  "error": {\n' +
      `    "message": "Item 'rs_68d46ffe895081a3ba72c0b5beee0bce0e58292eb8bc7725' of type 'reasoning' was provided without its required following item.",\n` +
      '    "type": "invalid_request_error",\n' +
      '    "param": "input",\n' +
      '    "code": null\n' +
      '  }\n' +
      '}',
    isRetryable: false,
    data: { error: [Object] },
    Symbol(vercel.ai.error): true,
    Symbol(vercel.ai.error.AI_APICallError): true
  }
}
================ STATE OF MESSAGES ==========================
[
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'How are you?' },
  {
    role: 'assistant',
    content: [
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d46ffd200081a38d5f9296f90afd320e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'text',
        text: 'I’m doing well, thanks for asking! How are you doing today?',
        providerOptions: {
          openai: {
            itemId: 'msg_68d46ffd92dc81a3866fb4853fae868b0e58292eb8bc7725'
          }
        }
      }
    ]
  },
  {
    role: 'user',
    content: 'Can you find one news article about React (js framework) and return the headline?'
  },
  {
    role: 'assistant',
    content: [
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d46ffe895081a3ba72c0b5beee0bce0e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'tool-call',
        toolCallId: 'ws_68d4700344b481a39b6cece51f24a1620e58292eb8bc7725',
        toolName: 'web_search',
        input: {
          action: { type: 'search', query: 'React 19 release news 2025' }
        },
        providerExecuted: undefined,
        providerOptions: undefined
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d4700344b481a39b6cece51f24a1620e58292eb8bc7725',
        toolName: 'web_search',
        output: { type: 'json', value: { status: 'completed' } },
        providerExecuted: true,
        providerOptions: undefined
      },
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d4700536d481a382b32453439c65c80e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'tool-call',
        toolCallId: 'ws_68d47006c6e881a3815d09d79df439c10e58292eb8bc7725',
        toolName: 'web_search',
        input: {
          action: { type: 'search', query: 'site:techcrunch.com React 2025' }
        },
        providerExecuted: undefined,
        providerOptions: undefined
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d47006c6e881a3815d09d79df439c10e58292eb8bc7725',
        toolName: 'web_search',
        output: { type: 'json', value: { status: 'completed' } },
        providerExecuted: true,
        providerOptions: undefined
      },
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d47008ee7881a3b5493d8d08afd7280e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'tool-call',
        toolCallId: 'ws_68d4700b74d081a3b8c59566f16774c70e58292eb8bc7725',
        toolName: 'web_search',
        input: {
          action: {
            type: 'search',
            query: 'site:infoworld.com React 19 stable 2025'
          }
        },
        providerExecuted: undefined,
        providerOptions: undefined
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d4700b74d081a3b8c59566f16774c70e58292eb8bc7725',
        toolName: 'web_search',
        output: { type: 'json', value: { status: 'completed' } },
        providerExecuted: true,
        providerOptions: undefined
      },
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d4700d6ca081a3a27cb950193202ec0e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'tool-call',
        toolCallId: 'ws_68d4700de16081a3b3538ef8ab5515570e58292eb8bc7725',
        toolName: 'web_search',
        input: {
          action: { type: 'search', query: 'site:theregister.com React 19' }
        },
        providerExecuted: undefined,
        providerOptions: undefined
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d4700de16081a3b3538ef8ab5515570e58292eb8bc7725',
        toolName: 'web_search',
        output: { type: 'json', value: { status: 'completed' } },
        providerExecuted: true,
        providerOptions: undefined
      },
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d4700f8e7481a3940ec6a51c71ae230e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'tool-call',
        toolCallId: 'ws_68d470140cdc81a3b60629e845a1dde60e58292eb8bc7725',
        toolName: 'web_search',
        input: {
          action: { type: 'search', query: 'React 19.1 news InfoWorld' }
        },
        providerExecuted: undefined,
        providerOptions: undefined
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d470140cdc81a3b60629e845a1dde60e58292eb8bc7725',
        toolName: 'web_search',
        output: { type: 'json', value: { status: 'completed' } },
        providerExecuted: true,
        providerOptions: undefined
      },
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d47015dff081a386ab119f525114200e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'tool-call',
        toolCallId: 'ws_68d470182ab481a38a587895c8fa47590e58292eb8bc7725',
        toolName: 'web_search',
        input: {
          action: {
            type: 'search',
            query: 'site:infoworld.com React 19.1 owner stack news'
          }
        },
        providerExecuted: undefined,
        providerOptions: undefined
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d470182ab481a38a587895c8fa47590e58292eb8bc7725',
        toolName: 'web_search',
        output: { type: 'json', value: { status: 'completed' } },
        providerExecuted: true,
        providerOptions: undefined
      },
      {
        type: 'reasoning',
        text: '',
        providerOptions: {
          openai: {
            itemId: 'rs_68d4701a3e5081a3987ca95aa258e1040e58292eb8bc7725',
            reasoningEncryptedContent: null
          }
        }
      },
      {
        type: 'text',
        text: 'React 19 builds on async transitions. ([infoworld.com](https://www.infoworld.com/article/2337306/react-19-builds-on-async-transitions.html?utm_source=openai))',
        providerOptions: {
          openai: {
            itemId: 'msg_68d4701fbf3481a382f2b902cbb060450e58292eb8bc7725'
          }
        }
      }
    ]
  },
  {
    role: 'tool',
    content: [
      {
        type: 'tool-result',
        toolCallId: 'ws_68d4700344b481a39b6cece51f24a1620e58292eb8bc7725',
        toolName: 'web_search',
        output: {
          type: 'error-text',
          value: "Model tried to call unavailable tool 'web_search'. Available tools: web_search_preview."
        }
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d47006c6e881a3815d09d79df439c10e58292eb8bc7725',
        toolName: 'web_search',
        output: {
          type: 'error-text',
          value: "Model tried to call unavailable tool 'web_search'. Available tools: web_search_preview."
        }
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d4700b74d081a3b8c59566f16774c70e58292eb8bc7725',
        toolName: 'web_search',
        output: {
          type: 'error-text',
          value: "Model tried to call unavailable tool 'web_search'. Available tools: web_search_preview."
        }
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d4700de16081a3b3538ef8ab5515570e58292eb8bc7725',
        toolName: 'web_search',
        output: {
          type: 'error-text',
          value: "Model tried to call unavailable tool 'web_search'. Available tools: web_search_preview."
        }
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d470140cdc81a3b60629e845a1dde60e58292eb8bc7725',
        toolName: 'web_search',
        output: {
          type: 'error-text',
          value: "Model tried to call unavailable tool 'web_search'. Available tools: web_search_preview."
        }
      },
      {
        type: 'tool-result',
        toolCallId: 'ws_68d470182ab481a38a587895c8fa47590e58292eb8bc7725',
        toolName: 'web_search',
        output: {
          type: 'error-text',
          value: "Model tried to call unavailable tool 'web_search'. Available tools: web_search_preview."
        }
      }
    ]
  },
  {
    role: 'user',
    content: 'Can you find one news article about React (js framework) and return the headline?'
  }
]
============ HERE FINISHES THE ERROR ==========================
NoOutputGeneratedError [AI_NoOutputGeneratedError]: No output generated. Check the stream for errors.
    at Object.flush (file:///some-folder/apps/some-app/node_modules/ai/dist/index.mjs:4684:27)
    at invokePromiseCallback (node:internal/webstreams/util:172:10)
    at Object.<anonymous> (node:internal/webstreams/util:177:23)
    at transformStreamDefaultSinkCloseAlgorithm (node:internal/webstreams/transformstream:621:43)
    at node:internal/webstreams/transformstream:379:11
    at writableStreamDefaultControllerProcessClose (node:internal/webstreams/writablestream:1162:28)
    at writableStreamDefaultControllerAdvanceQueueIfNeeded (node:internal/webstreams/writablestream:1253:5)
    at writableStreamDefaultControllerClose (node:internal/webstreams/writablestream:1220:3)
    at writableStreamClose (node:internal/webstreams/writablestream:722:3)
    at writableStreamDefaultWriterClose (node:internal/webstreams/writablestream:1091:10) {
  cause: undefined,
  Symbol(vercel.ai.error): true,
  Symbol(vercel.ai.error.AI_NoOutputGeneratedError): true
}


muniter avatar Sep 24 '25 22:09 muniter

Hi everyone,

On platform.openai.com, use gpt-5* with any level of reasoning and tool calling. Allow the model to produce a reasoning item and a tool call. If you try to edit this tool call, you will see the following warning:

Editing this message will remove reasoning items, since the content following them must remain unchanged to support future requests.

This warning is very clear. The reasoning item ONLY works with the EXACT tool call the model produced. When using experimental_repairToolCall we are sometimes updating the tool call (to fix it). This is what causes the error we see.

Solution (@gr2m):

The experimental_repairToolCall cannot return ONLY the tool call. It must return both the NEW reasoning item that produced the new tool call and the new tool call itself. Otherwise you would have to drop the reasoning item completely.

Workaround:

For now I am keeping track of the last message array through the prepareStep hook, catch this specific error type and remove the problematic reasoning item from the message content. Granted, the reasoning item is lost but it beats failing completely.

Here is a code example:

type Options = Partial<Parameters<typeof generateText>[0]>
  & ({
    prompt: string | ModelMessage[]
  } | {
    messages: ModelMessage[]
  });

const generateTextWrapper = async (options: Options) => {
  let lastMessages: ModelMessage[] = [];

  try {
    return await generateText({
      ...options,
      model,
      system: systemPrompt,
      tools,
      stopWhen: stepCountIs(25),
      providerOptions: getProviderOptions(model.modelId),
      experimental_repairToolCall: repairToolCallByReAskStrategy(model.modelId),
      prepareStep: ({ messages }) => {
        // Capture the last messages in case of "reasoning" followed by "tool-call" error
        lastMessages = messages;
        return {};
      },
    });
  } catch (error: any) {
      if (
        error.name === "AI_APICallError"
        && error.message.includes("type 'reasoning' was provided without its required following item")
      ) {
        console.log("Attempting to repair last message with reasoning followed by tool-call");

        let lastProblematicIndex = -1;

        for (let i = lastMessages.length - 1; i >= 0; i--) {
          const message = lastMessages[i];
          if (message.role === "assistant" && Array.isArray(message.content)) {
            for (let j = 0; j < message.content.length - 1; j++) {
              const part = message.content[j];
              const nextPart = message.content[j + 1];

              if (part.type === "reasoning" && nextPart?.type === "tool-call") {
                lastProblematicIndex = i;
                break;
              }
            }

            if (lastProblematicIndex !== -1) {
              break;
            }
          }
        }

        if (lastProblematicIndex !== -1) {
          const cleanedMessages: ModelMessage[] = lastMessages.map((message, index) => {
            if (index === lastProblematicIndex && message.role === "assistant" && Array.isArray(message.content)) {
              const cleanedContent: typeof message.content = [];

              for (let j = 0; j < message.content.length; j++) {
                const part = message.content[j];
                const nextPart = message.content[j + 1];

                if (part.type === "reasoning" && nextPart?.type === "tool-call") {
                  console.log("Removing reasoning item followed by tool-call from last problematic message");
                  continue;
                }

                cleanedContent.push(part);
              }

              return {
                ...message,
                content: cleanedContent,
              };
            }
            return message;
          });

          console.log("Retrying with cleaned messages");
          return generateTextWrapper({
            messages: cleanedMessages,
          });
        }
      }
      throw error;
  }
};

apostolisCodpal avatar Oct 06 '25 10:10 apostolisCodpal

Hey everyone, I am also seeing this behaviour: if the LLM supplies an invalid tool call input that fails validation within the SDK, I get an identical error. That failed Zod validation is converted into a tool-result or tool-call that OpenAI considers problematic. @apostolisCodpal's error handling works on this too.

changesbyjames avatar Oct 08 '25 15:10 changesbyjames

I'm still getting this in "ai": "^5.0.80", "@ai-sdk/openai": "^2.0.53" with just a simple loop that appends response.messages to what will be the input of the next call.

I've had to pull the AI SDK due to this lack of reliability; simply running it in a loop fails randomly.

@lgrammel can we expect a fix for this? Or is v5 not being worked on due to the upcoming v6 (which may still have this issue)?

danthegoodman1 avatar Nov 02 '25 22:11 danthegoodman1

Here's a heavily redacted input that was used:

[
  {
    "role": "system",
    "content": "..."
  },
  {
    "role": "user",
    "content": "..."
  },
  {
    "role": "assistant",
    "content": [
      {
        "type": "reasoning",
        "text": "...",
        "providerOptions": {
          "openai": {
            "itemId": "rs_0cf907ee399203bc006907decd62e081a28b835083e94d8dff",
            "reasoningEncryptedContent": null
          }
        }
      },
      {
        "type": "tool-call",
        "toolCallId": "call_7g6jYDqjrB7ZdxLQyk5s4Anu",
        "toolName": "read_file",
        "input": {
          "path": "....md"
        },
        "providerOptions": {
          "openai": {
            "itemId": "fc_0cf907ee399203bc006907ded09aac81a2b5d1e0382d9c5f62"
          }
        }
      }
    ]
  },
  {
    "role": "tool",
    "content": [
      {
        "type": "tool-result",
        "toolCallId": "call_7g6jYDqjrB7ZdxLQyk5s4Anu",
        "toolName": "read_file",
        "output": {
          "type": "text",
          "value": " ..."
        }
      }
    ]
  },
  {
    "role": "assistant",
    "content": [
      {
        "type": "reasoning",
        "text": "...",
        "providerOptions": {
          "openai": {
            "itemId": "rs_0cf907ee399203bc006907ded2318881a2ad08d70e3aee53fc",
            "reasoningEncryptedContent": null
          }
        }
      },
      {
        "type": "reasoning",
        "text": "...",
        "providerOptions": {
          "openai": {
            "itemId": "rs_0cf907ee399203bc006907ded2318881a2ad08d70e3aee53fc",
            "reasoningEncryptedContent": null
          }
        }
      },
      {
        "type": "reasoning",
        "text": "...",
        "providerOptions": {
          "openai": {
            "itemId": "rs_0cf907ee399203bc006907ded2318881a2ad08d70e3aee53fc",
            "reasoningEncryptedContent": null
          }
        }
      },
      {
        "type": "reasoning",
        "text": "...",
        "providerOptions": {
          "openai": {
            "itemId": "rs_0cf907ee399203bc006907ded2318881a2ad08d70e3aee53fc",
            "reasoningEncryptedContent": null
          }
        }
      },
      {
        "type": "tool-call",
        "toolCallId": "call_M4zXnm2tGTsfdBFGZVrGFQW3",
        "toolName": "extractTags",
        "input": {
          "tags": [
            ... strings
          ]
        }
      }
    ]
  },
  {
    "role": "tool",
    "content": [
      {
        "type": "tool-result",
        "toolCallId": "call_M4zXnm2tGTsfdBFGZVrGFQW3",
        "toolName": "extractTags",
        "output": {
          "type": "error-text",
          "value": "Invalid input for tool extractTags: Type validation failed: Value: ... Error message: [\n  {\n    \"expected\": \"record\",\n    \"code\": \"invalid_type\",\n    \"path\": [\n      \"definitions\"\n    ],\n    \"message\": \"Invalid input: expected record, received undefined\"\n  }\n]"
        }
      }
    ]
  }
]

Note that while this run failed the tool call, I've seen the error with successful tool-call parameters as well. However, it appears that when input validation fails, I always run into this error.

danthegoodman1 avatar Nov 02 '25 22:11 danthegoodman1

still an issue in the v6 beta

danthegoodman1 avatar Nov 07 '25 17:11 danthegoodman1

hi @danthegoodman1, do you mind sending a code reproduction? what tool definitions are you using? i want to understand the flow of how you're ending up with this messages array. it can be heavily redacted as far as the core logic of your program goes

aayush-kapoor avatar Nov 10 '25 17:11 aayush-kapoor

@aayush-kapoor let me know if this is sufficient; the context it's deployed in is highly sensitive, but I could share a bit more over DMs if needed.

We call the agent in a simple loop (we need manual control because the AI SDK does not allow per-turn abort signals or request hedging):

let _messages = [...]
for (...) {
  const result = await this.config.generateText(...)
  _messages.push(...result.response.messages)
  if (result.toolCalls.length === 0) {
    return result.text
  }
}

The tool definition that is causing problems is:

const extractTags = tool({
  inputSchema: z.object({
    tags: z
      .array(z.string().describe("The exact text of the request as it appears in the document, without any additional explanations or definitions. This should be ONLY the request text itself."))
      .describe("The tags that were extracted from the document"),
    definitions: z
      .record(z.string(), z.string())
      .describe("The definitions that were extracted from the document"),
  }),
})

What I find is that gpt-5 doesn't provide the definitions at all (despite clearly referencing them in the reasoning summary... gemini and claude handle this fine), which causes the invalid tool input response. But whatever the bug in the AI SDK is, when the result of _messages.push(...result.response.messages) is fed back into the next loop iteration, it bricks the context. I haven't seen this issue when the model calls tools with valid input.

danthegoodman1 avatar Nov 10 '25 17:11 danthegoodman1

I think you can probably more or less recreate this with the above tool by having it call the tool in a simulated test, modifying the response to produce the validation error above, then feeding it back in

danthegoodman1 avatar Nov 10 '25 17:11 danthegoodman1

hi @danthegoodman1, I was able to reproduce it, so thank you for that! A temporary fix for now: if you set store: false in providerOptions for openai, you won't see the error. However, setting this parameter means we essentially do not retain any context from this interaction for future interactions, so be wary of that.

Meanwhile the team is still working to figure out the root cause and fix it once and for all. appreciate your patience :)
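For anyone else hitting this, a sketch of where the flag goes (model and tool names are illustrative, taken from earlier in the thread; this shows the shape of the call, not a complete program):

```typescript
const result = await generateText({
  model: openai("gpt-5"),
  messages: _messages,
  tools: { extractTags },
  providerOptions: {
    openai: {
      // Workaround from above: disables server-side storage of response
      // items, so no stored 'rs_…' reasoning item can be referenced
      // without its required following item.
      store: false,
    },
  },
});
```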

aayush-kapoor avatar Nov 14 '25 20:11 aayush-kapoor

Great news! For store: false, is that for openai not storing it? I'd prefer to locally manage my context anyway; I'm purely using the AI SDK as a way to easily swap providers, and I otherwise need super low-level control anyway

danthegoodman1 avatar Nov 14 '25 20:11 danthegoodman1

seems so, which is honestly what I thought was the default and want anyway, so that works out perfectly for me. We've switched mostly to claude but still have a few openai ones. Will report back if we run into this again.

danthegoodman1 avatar Nov 14 '25 20:11 danthegoodman1

you can define that flag for anthropic models as well - let us know if you run into any other issues

aayush-kapoor avatar Nov 14 '25 21:11 aayush-kapoor

@aayush-kapoor where do you define this for anthropic models? I'm not seeing it at all under https://ai-sdk.dev/providers/ai-sdk-providers/anthropic or the index.d.ts for the provider

danthegoodman1 avatar Nov 17 '25 16:11 danthegoodman1

@danthegoodman1 you can define it in the providerOptions like this:

providerOptions: {
  anthropic: {
    store: false,
  },
}

define this parameter in the generateText call you're making

aayush-kapoor avatar Nov 18 '25 16:11 aayush-kapoor