feat: improve session loop for reasoning models
Improve the session prompt loop to properly support AI models that separate their reasoning process from final responses, with a specific focus on interleaved thinking patterns, where reasoning, tool calls, and text are mixed.
- Changed the `lastAssistant` type from `MessageV2.Assistant` to `MessageV2.WithParts` to access the full message structure, including its parts
- Added detection of reasoning blocks vs. text content in assistant messages
- Enhanced the loop exit conditions to distinguish reasoning-only content from reasoning mixed with text or tool calls, as produced by interleaved thinking models
- Added comprehensive comments explaining the complex exit conditions
- Supports separated thinking, interleaved thinking, and mixed tool-call patterns
- Prevents premature loop termination for models that emit mixed content types
This enables proper handling of interleaved thinking models like MiniMax M2 that may output reasoning blocks interspersed with tool calls and text content, ensuring the loop continues until a complete response with actual text is available.
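A minimal sketch of the exit logic described above; the part shapes and the `shouldContinue` helper are illustrative stand-ins, not the actual `MessageV2` types in opencode:

```ts
// Illustrative sketch only: Part/WithParts are simplified stand-ins for
// the real MessageV2 types, and shouldContinue is a hypothetical helper.
type Part =
  | { type: "reasoning"; text: string }
  | { type: "text"; text: string }
  | { type: "tool"; state: "pending" | "running" | "completed" };

interface WithParts {
  parts: Part[];
}

function shouldContinue(lastAssistant: WithParts): boolean {
  const parts = lastAssistant.parts;
  const hasReasoning = parts.some((p) => p.type === "reasoning");
  const hasText = parts.some((p) => p.type === "text");
  const hasTool = parts.some((p) => p.type === "tool");

  // Reasoning-only turn: the model is still thinking, keep looping.
  if (hasReasoning && !hasText && !hasTool) return true;
  // Tool calls without final text: the turn is not a complete answer yet.
  if (hasTool && !hasText) return true;
  // Otherwise the response contains actual text, so the loop can exit.
  return false;
}
```

The key distinction is that a turn ending in reasoning or tool calls is treated as intermediate, and only a turn containing actual text counts as a final response.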
I hit a similar issue with the official MiniMax API (https://github.com/sst/opencode/issues/4112): the session stopped randomly after a thinking block or tool call.
It seems to be fixed with this patch.
@no1wudi any reason why you closed this PR? it seems to work for me
@hicdercpl I closed it because I hit another issue where the session sometimes stops on parallel tool calls with this model, and I'm not sure this patch is the proper way to fix it.
@rekram1-node Could you take a look?
/review
So what's ur new exit condition exactly
For this patch, I made the following two modifications:

1. Added a check for whether any tool call is in a pending state. I'm not entirely sure this is necessary.
2. Added an extra check on the response to ensure the loop continues as long as the model's response contains a tool call.
Regarding point 2, the issue I encountered is that the loop often exits prematurely when the model's thinking and tool calls are interleaved. It seems like the Minimax API incorrectly returns a finish signal in the middle of this process? The Kimi thinking model exhibits similar behavior, but there the loop doesn't exit like this.
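A rough sketch of what those two checks amount to; all names here (`isTurnFinished`, `ToolPart`, `finishReason`) are illustrative assumptions, not the actual opencode internals:

```ts
// Illustrative only: ToolPart and finishReason are assumed shapes,
// not the real opencode or provider types.
type ToolState = "pending" | "running" | "completed";

interface ToolPart {
  type: "tool";
  state: ToolState;
}

function isTurnFinished(
  finishReason: string,
  toolParts: ToolPart[],
  hasText: boolean,
): boolean {
  // Check 1: any pending tool call means the turn is still in progress.
  const hasPendingTool = toolParts.some((p) => p.state === "pending");
  // Check 2: even if the provider reports finishReason === "stop" between
  // interleaved reasoning and tool calls, treat it as spurious while the
  // response still contains tool calls and no final text.
  const spuriousStop = toolParts.length > 0 && !hasText;
  return finishReason === "stop" && !hasPendingTool && !spuriousStop;
}
```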
With this approach the loop keeps running, but I'm not sure why I get the status indicator in the middle of the work:
Without this patch, the loop exited here.
@no1wudi what provider are u seeing it with? Is it specifically w/ minimax from the minimax provider? And is it through the anthropic api endpoint or through openai "compat"? Or is it just using the defaults for the provider from models.dev?
I wanna get this fixed but need to understand the full scope of the issue, because I think there are issues w/ all interleaved thinking models, but I thought if they sent it in anthropic format everything worked fine.
@rekram1-node I use the default settings from models.dev with the minimax-cn provider
@no1wudi I discussed this w/ minimax; it's a bug on their end and they said they will fix it.