AI_APICallError: tool_use blocks found without corresponding tool_result blocks
Description
Getting an AI_APICallError when tool_use blocks are not immediately followed by tool_result blocks.
Error Message
AI_APICallError: messages.69: `tool_use` ids were found without `tool_result` blocks immediately after: toolu_01X6jLG6TqrVeBTSEqq4ozEN. Each `tool_use` block must have a corresponding `tool_result` block in the next message.
Expected Behavior
opencode should format the message history so that every `tool_use` block is immediately followed by a matching `tool_result` block in the next message, instead of the session failing unrecoverably.
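For reference, a minimal sketch in plain Python dicts (not opencode's actual code) of the pairing rule Anthropic's Messages API enforces:

```python
# Every assistant `tool_use` block must be answered by a `tool_result`
# block, matched by id, in the very next (user-role) message.
messages = [
    {
        "role": "assistant",
        "content": [
            {
                "type": "tool_use",
                "id": "toolu_01X6jLG6TqrVeBTSEqq4ozEN",
                "name": "write",               # illustrative tool name
                "input": {"path": "spec.md"},  # illustrative input
            }
        ],
    },
    {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": "toolu_01X6jLG6TqrVeBTSEqq4ozEN",
                "content": "ok",               # illustrative result
            }
        ],
    },
]
# The error means the history that was sent lacked the second message,
# or the second message had no tool_result matching that id.
```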
Additional Context
- This appears to be a message formatting/validation issue
- The error references a specific tool_use ID: toolu_01X6jLG6TqrVeBTSEqq4ozEN
- Message index 69 suggests this occurs in longer conversations
Steps to Reproduce
This error occurred during normal usage when the AI made tool calls but the tool results weren't properly matched.
I have experienced this too. For now you can do this: type /undo, then Enter. It will resend your last prompt and it usually works. Sometimes I had to restart opencode, load the session, then undo the last prompt and continue. It worked for me; hopefully it works for you too
can anyone share a session for this by chance?
I am not able to find the session for this one. If this error appears again, I will provide the session in this thread.
@rekram1-node this just happened to me on Opus 4.5, opencode version 1.0.134
Session log
Optimizing YouTube transcript retrieval
Session ID: ses_4fc89a65dffeXDBJYG337eRw9o Created: 12/9/2025, 3:14:06 PM Updated: 12/9/2025, 3:25:33 PM
User
At the end, I always use this library to get transcripts. Right now, we take the difficult route: downloading the audio file and then transcribing it. However, for a lot of videos there is a transcript available directly from YouTube, and then we can bypass the manual download entirely.
I found this library online: https://github.com/jdepoix/youtube-transcript-api
What I want the new flow to be when syncing a video:
- Check if there is a transcript available directly
- If not, take the original route. If there is, download the transcript and proceed
We can do these steps in parallel as well. E.g. we synchronously go through the video list, but when we need to actually download an mp3, we can do that in a different worker, and already check the next video for a direct transcript, then queue up MP3 downloads with the worker
This is a brief overview of what I want. Please think this through in depth, create a complete plan and spec to implement this, and write it down in a file
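A minimal sketch of the direct-transcript fetch with that library, assuming its pre-1.0 `get_transcript` interface (newer 1.x releases expose `YouTubeTranscriptApi().fetch(...)` instead):

```python
from youtube_transcript_api import (
    YouTubeTranscriptApi,
    NoTranscriptFound,
    TranscriptsDisabled,
)

def fetch_direct_transcript(video_id: str) -> str | None:
    """Return the transcript as plain text, or None if YouTube has none."""
    try:
        snippets = YouTubeTranscriptApi.get_transcript(video_id, languages=["en"])
    except (NoTranscriptFound, TranscriptsDisabled):
        return None  # expected case: fall back to download + transcribe
    # Each snippet is {"text": ..., "start": ..., "duration": ...};
    # timestamps are dropped to mirror the plain-text Whisper output.
    return " ".join(s["text"] for s in snippets)
```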
Assistant
I'll analyze this request thoroughly. Let me first understand the current implementation and then explore the youtube-transcript-api library to create a comprehensive plan.
Tool: task
Tool: webfetch
Assistant
I now have a good understanding of both the current codebase and the youtube-transcript-api library. Let me ask some clarifying questions before I create a detailed plan.
Clarifying Questions
Before I create the complete spec, I have a few questions to ensure the plan matches your expectations:
- Transcript source preference: When a YouTube transcript is available, should we:
  - Always prefer YouTube's transcript over MLX Whisper transcription?
  - Allow user configuration (e.g., a `--prefer-youtube-transcript` flag)?
  - What if YouTube has auto-generated captions vs. manual captions - do you have a preference?
- Language handling: The youtube-transcript-api supports multiple languages. Should we:
  - Try to match the user's configured language preference?
  - Fall back to English if the preferred language isn't available?
  - Store language metadata in the database?
- Transcript format consistency: YouTube transcripts have timestamps for each snippet. Should we:
  - Keep the same output format as MLX Whisper (just text with a metadata header)?
  - Preserve timestamps in a different format?
  - Store a flag indicating the transcript source (youtube vs. whisper)?
- Parallel worker architecture: You mentioned:
  "we synchronously go through the video list, but when we need to actually download an mp3, we can do that in a different worker"
  Currently downloads are sequential (main thread) and transcriptions are concurrent (worker thread). Are you proposing:
  - Option A: Keep downloads sequential, but check YouTube transcripts first (fast), then queue MP3 downloads only when needed
  - Option B: Add a separate download worker pool for parallel MP3 downloads
  - Option C: Something else?
- Error handling: If the YouTube transcript fetch fails (e.g., the video has no transcript, or the IP is blocked), should we:
  - Silently fall back to download+transcribe?
  - Log a warning and fall back?
  - Retry with different language codes?
- Database schema: Do you want to track:
  - Which videos have YouTube transcripts available?
  - Which source was used for each transcript (youtube_api vs. whisper)?
  - Transcript language?
Let me know your thoughts on these questions and I'll create a comprehensive implementation plan.
User
- Yes, always prefer from the API, both auto generated or manual
- always english, I don't care about other languages
- Don't care about timestamps. Just text like we get with whisper
- You can architect it, but I think: 1 worker to check transcript API (fast), if not, queue and send off to 1 download worker, then we have the transcript worker after that
- If IP blocked, please log it to the console as a nice warning, and fall back to download. No transcript available is expected behavior and then we fall back to mp3
- If currently we track anything about videos in e.g. database somewhere, then sure you can add that. Otherwise don't bother
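A minimal sketch of that pipeline under the constraints above; `fetch_direct_transcript` refers to the earlier sketch, and the two stub functions are hypothetical stand-ins for the project's existing persistence and download+Whisper paths:

```python
import queue
import threading

download_q: queue.Queue[str] = queue.Queue()

def save_transcript(video_id: str, text: str) -> None:
    ...  # placeholder: existing persistence path

def download_and_transcribe(video_id: str) -> None:
    ...  # placeholder: existing mp3 download + MLX Whisper path

def check_worker(video_ids: list[str]) -> None:
    # Fast pass: always prefer the transcript API (auto-generated or
    # manual, English only); queue an MP3 download only when needed.
    for vid in video_ids:
        try:
            text = fetch_direct_transcript(vid)
        except Exception as exc:  # e.g. IP blocked; exact exception class varies by library version
            print(f"WARNING: transcript API failed for {vid}: {exc}")
            text = None
        if text is not None:
            save_transcript(vid, text)
        else:
            download_q.put(vid)

def download_worker() -> None:
    while True:
        download_and_transcribe(download_q.get())
        download_q.task_done()

threading.Thread(target=download_worker, daemon=True).start()
```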
Assistant
Thank you for the clarifications. Now I have everything I need to create a comprehensive plan. Let me write this to a spec file.
Tool: read
Tool: read
Tool: read
Assistant
Now I have a complete understanding of the codebase. Let me also check the config and transcriber for any relevant details:
Tool: read
Tool: read
Assistant
Now I have all the information I need. Let me create a comprehensive spec document:
Tool: write
Now I have all the context I need. Let me write a comprehensive specification document:
Tool: write
Assistant
I don't want to undo the whole last message, as I think that would delete the result, but having to start a whole new conversation is also annoying
@rekram1-node Hey, what's the priority on this? It is the single most annoying bug: it can kill a long-running session at random with no chance to recover or summarize for handover to another session. And it happens to me regularly enough to be a pain
@Mishkun if you can gimme an `opencode export > session.json` I can fix it
But I need the actual session data to figure out where it went wrong
session-ses_4d6d.md session.json
@rekram1-node not OP but here is a session md and json from me hitting this in version *.164
okay, on my list to check out here
#1662 related