
feat(provider): add interleaved thinking support for models

Open · DanielusG opened this issue 3 weeks ago • 8 comments

Summary

  • Add interleaved_thinking field to ModelsDev Model schema to detect models with interleaved thinking capability
  • Add interleavedThinking capability to provider Model interface for internal representation
  • Update transform logic to handle the new field mapping with proper default values
  • Add comprehensive test coverage for interleaved thinking transformation

What is Interleaved Thinking?

Interleaved thinking is a reasoning approach where large language models alternate between thinking and action/answering steps, rather than following the traditional "think-then-answer" pattern. Instead of generating a long chain of thought followed by a single response, models using interleaved thinking follow a pattern like:

Reason → Tool Call → Observe → Reason → Tool Call → ...
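
As a rough illustration, a single interleaved turn in an OpenAI-compatible message history might look like the sketch below; `reasoning_content` is a common provider extension rather than part of the official OpenAI schema, and the `lookup` tool is made up for the example.

```ts
// Hypothetical interleaved turn: reason → call a tool → observe → reason → answer.
// `reasoning_content` is a provider extension, not official OpenAI schema.
const history = [
  { role: "user", content: "What is the population of the capital of France?" },
  {
    role: "assistant",
    reasoning_content: "I need the capital first, then its population.",
    content: null,
    tool_calls: [{
      id: "call_1",
      type: "function",
      function: { name: "lookup", arguments: '{"q":"capital of France"}' },
    }],
  },
  { role: "tool", tool_call_id: "call_1", content: "Paris" },
  {
    role: "assistant",
    reasoning_content: "Now I can answer with the population of Paris.",
    content: "Paris, the capital of France, has roughly 2.1 million inhabitants.",
  },
];
```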

Key Benefits:

  1. Reduced Latency: Cuts time-to-first-token (TTFT) by over 80% on average compared to traditional chain-of-thought reasoning
  2. Dynamic Adaptation: Allows models to adjust their strategy based on intermediate results and tool outputs
  3. Error Reduction: Enables immediate checking of reasoning steps, reducing error propagation in long chains
  4. Enhanced Transparency: Provides inspectable multi-step thinking through reasoning_details structures
  5. Better Performance: Shows up to 19.3% improvement in accuracy on complex reasoning tasks

Research & Sources

This implementation is based on current research and industry developments.

Technical Changes

  • ModelsDev Schema: Added optional interleaved_thinking boolean field to detect model capability
  • Provider Interface: Added optional interleavedThinking boolean to Model capabilities
  • Transform Logic: Updated transformation functions to map between schemas with proper defaults (see the sketch after this list)
  • Backward Compatibility: Made field optional to ensure existing models continue to work
  • Test Coverage: Added tests to verify proper transformation and default handling
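
A minimal sketch of the shape of these changes, assuming Zod schemas for the models.dev data; the real opencode and models.dev schemas carry many more fields, so treat the exact shapes below as simplified assumptions:

```ts
import { z } from "zod";

// models.dev schema: the new field is optional and snake_case, so existing
// catalog entries without it remain valid.
const ModelsDevModel = z.object({
  id: z.string(),
  reasoning: z.boolean().optional(),
  interleaved_thinking: z.boolean().optional(),
});

// Internal provider representation uses camelCase.
interface Model {
  id: string;
  reasoning: boolean;
  interleavedThinking: boolean;
}

// Transform with explicit defaults: a missing field means "not supported".
function toProviderModel(m: z.infer<typeof ModelsDevModel>): Model {
  return {
    id: m.id,
    reasoning: m.reasoning ?? false,
    interleavedThinking: m.interleaved_thinking ?? false,
  };
}
```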

Applications

This capability transforms traditional function-calling into agent-level tool use, making it particularly valuable for:

  • Complex multi-hop question answering
  • Mathematical reasoning
  • Logical deduction
  • Tool-assisted problem solving

Testing

All existing tests pass, and new test coverage has been added for the interleaved thinking transformation logic. The changes maintain full backward compatibility with existing model configurations.

DanielusG · Dec 07 '25 13:12

I think the correct fix is adding reasoning_details support to the OpenAI-compatible provider. We should track the interleaved thinking boolean per model, though that should first be done on models.dev.

I am going to add interleaved thinking support to our custom AI SDK provider.

rekram1-node · Dec 08 '25 16:12

I think the correct fix is adding reasoning_details support to the OpenAI-compatible provider. We should track the interleaved thinking boolean per model, though that should first be done on models.dev.

I am going to add interleaved thinking support to our custom AI SDK provider.

I tried using the reasoning_details parameter, but it didn't work for many providers; for example, LiteLLM doesn't work, nor does VertexAI (for the Kimi and MiniMax APIs). Instead, I tried passing the reasoning via content, and GPT OSS magically became more competent; it was like night and day for simple local tasks. MiniMax and Kimi showed the same result: before, their reasoning constantly started with "The user asked me...", whereas now, in subsequent messages, they respond to the tool output.
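
For reference, a minimal sketch of the workaround being described here: folding earlier reasoning back into the assistant content when replaying history to providers that drop reasoning_details/reasoning_content. The Msg shape and the <think> delimiter are illustrative assumptions; the right wrapper depends on what the model family was trained on.

```ts
// Hypothetical history rewrite: fold earlier reasoning back into the
// assistant text for providers that ignore reasoning_details/reasoning_content.
type Msg = { role: string; content: string | null; reasoning_content?: string };

function foldReasoningIntoContent(history: Msg[]): Msg[] {
  return history.map((m) => {
    if (m.role !== "assistant" || !m.reasoning_content) return m;
    const { reasoning_content, ...rest } = m;
    return {
      ...rest,
      // "<think>" is only an illustrative delimiter; use whatever wrapper
      // the target model actually expects.
      content: `<think>${reasoning_content}</think>\n${m.content ?? ""}`,
    };
  });
}
```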

DanielusG · Dec 08 '25 16:12

Ah okay, that's a good point. Hm, okay, I'll do some more research and we will talk internally about this problem in a few hrs. I do see why this fix works; it does feel a bit like a hack, but I'm very thankful for you bringing this to my attention. I will keep you posted.

rekram1-node · Dec 08 '25 17:12

Instead, I tried passing the reasoning via content, and GPT OSS magically became more competent

How did you do this? I am seeing the exact same thing that you are reporting: e.g., each reasoning message starts with "The user asks me..." instead of the model continuing where it left off.

Mushoz · Dec 09 '25 19:12

Hi @rekram1-node, I've seen the PR about "better interleaved thinking" (#5298), but I can confirm that it still doesn't work on LiteLLM Proxy. Since I use many models from different providers, the only practical way for me to manage the situation and track costs is through the LiteLLM proxy. The problem is not limited to LiteLLM, either: even when querying llama.cpp directly, the reasoning is not passed back to the model. In practice, it seems that to ensure greater compatibility it would be better to support the content field in addition to the reasoning_content and reasoning_details fields. Let me know what you think.

DanielusG · Dec 10 '25 09:12

There were 2 different interleaved thinking PRs; what format does LiteLLM expect?

Can you not define this in your opencode.json? We can add more mapping options, but if all your models are being defined by you, you should be able to specify which data to send back.

rekram1-node · Dec 13 '25 05:12

There were 2 different interleaved thinking PRs; what format does LiteLLM expect?

Can you not define this in your opencode.json? We can add more mapping options, but if all your models are being defined by you, you should be able to specify which data to send back.

It seems that nowadays there is no standard for interleaved thinking support that all providers adhere to, so everyone implements whatever version they like, and some don't implement it at all.

That is why, in my opinion, it would be truly useful if OpenCode (and generally any LLM client) offered a certain degree of provider customization.

So, in the specific case of models behind LiteLLM, it seems you have to pass the reasoning using content, but with other providers that support the OpenAI schema you need to use specific fields like reasoning_content and reasoning_details.

I saw that PR #5207 was merged; could this be useful in any way for creating provider-specific plugins without messing up the configuration?

DanielusG · Dec 13 '25 07:12

We can add/expand the interleaved thinking configuration support, but I don't think we should be converting all reasoning chunks to text parts. If there is a specific provider that requires it, then maybe, but so far all the providers that'd want it that way (that I've seen) already send the reasoning chunks back as assistant messages with the ... tags.

rekram1-node · Dec 13 '25 18:12

@rekram1-node I can confirm that the interleaved thinking support with the parameter you implemented works by specifying the field reasoning_details or reasoning_content. My issue was that I was simply passing interleaved: true and it didn't always work with all models; specifying the field instead works even with LiteLLM and other providers. For me, I can close this PR, because a clearly better version has been implemented and mine was just a workaround. Perhaps simply updating the documentation about it would be marginally useful.
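
For anyone landing here later, the working setup seems to be something along the lines of the snippet below in opencode.json. The provider/models/options nesting and the shape of the interleaved option are assumptions reconstructed from this thread, not a documented schema, so check the current opencode docs before relying on it:

```json
{
  "provider": {
    "litellm": {
      "models": {
        "minimax-m2": {
          "options": {
            "interleaved": { "field": "reasoning_content" }
          }
        }
      }
    }
  }
}
```

Per the comments above, a bare interleaved: true did not work reliably for every model, while naming the specific reasoning field did.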

DanielusG · Dec 16 '25 16:12

Sweet

rekram1-node · Dec 16 '25 17:12