
Feature: Mid-stream context injection at natural breaks (like Claude Code)

Open • sudoxreboot opened this issue 3 weeks ago • 4 comments

Feature Request

Current Behavior

When typing while the AI is responding, messages are queued and only processed after the current response completes. This means if I remember something important mid-response, I have to wait for completion or cancel entirely.
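
For contrast, the current behavior is roughly the sketch below (all names are hypothetical, not opencode's actual internals): queued input only joins the conversation once the stream has finished.

```ts
// Rough sketch of today's behavior: input typed mid-response is
// queued, and the queue is drained only after the response completes.
type Message = { role: "user" | "assistant"; content: string };

async function respondThenDrain(
  context: Message[],
  inputQueue: string[],
  streamFullResponse: (ctx: Message[]) => Promise<void>,
): Promise<void> {
  await streamFullResponse(context); // runs to completion first
  for (const text of inputQueue.splice(0)) {
    context.push({ role: "user", content: text }); // only now does new input land
  }
}
```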

Desired Behavior

Allow queued user input to be injected at natural breaks during response generation, similar to how Claude Code handles this (a rough sketch follows the steps below):

  1. User types new input while AI is responding
  2. System detects a "natural break" (after tool result, end of text block, between reasoning steps)
  3. User's new input is appended to the conversation context
  4. AI continues but now sees the new info and can incorporate it
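
A rough TypeScript sketch of steps 1–4; the event shapes and function names are made up for illustration, not opencode's real API:

```ts
// Hypothetical stream events; a "natural break" is any point where the
// conversation is structurally complete (no half-finished text block or
// dangling tool call).
type StreamEvent =
  | { type: "token"; text: string }
  | { type: "text_block_end" }
  | { type: "tool_result_received" }
  | { type: "reasoning_step_end" };

type Message = { role: "user" | "assistant"; content: string };

const pendingInput: string[] = []; // step 1: filled as the user types mid-response

function isNaturalBreak(event: StreamEvent): boolean {
  return event.type !== "token"; // step 2: everything but a raw token is a break
}

// Steps 3-4: append queued input so the model sees it on its next step.
function maybeInject(event: StreamEvent, context: Message[]): void {
  if (!isNaturalBreak(event) || pendingInput.length === 0) return;
  for (const text of pendingInput.splice(0)) {
    context.push({ role: "user", content: text });
  }
}
```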

Why This Matters

  • Sometimes you forget context and remember mid-response
  • Canceling loses progress and costs tokens
  • The current queue waits until completion, missing the chance to influence the in-flight response

Implementation Notes

The agent loop would need to (see the sketch after this list):

  1. Check for pending queued input at safe breakpoints (after tool calls complete, after text blocks)
  2. If input exists, append it to the conversation before continuing
  3. Let the model naturally incorporate the new context in its next step
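
A minimal sketch of such a loop, assuming a simplified step API (runModelStep, the Step shape, and drainQueuedInput are placeholders, not opencode's real code):

```ts
type Message = { role: "user" | "assistant"; content: string };
type Step = {
  assistantMessage: Message;
  toolCall?: { id: string; run: () => Promise<string> };
  done: boolean;
};

const inputQueue: string[] = [];

function drainQueuedInput(context: Message[]): void {
  for (const text of inputQueue.splice(0)) {
    context.push({ role: "user", content: text });
  }
}

async function agentLoop(
  context: Message[],
  runModelStep: (ctx: Message[]) => Promise<Step>,
): Promise<void> {
  while (true) {
    const step = await runModelStep(context);
    context.push(step.assistantMessage);

    if (step.toolCall) {
      const result = await step.toolCall.run();
      context.push({ role: "user", content: result }); // tool result returns as a user turn
      drainQueuedInput(context); // breakpoint: tool_use/tool_result pair is complete
      continue;
    }

    drainQueuedInput(context); // breakpoint: end of a text block
    if (step.done) return;
  }
}
```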

This is different from an interrupt (which breaks the tool_use/tool_result pairing): it's a graceful context merge at safe points.
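
To make the pairing constraint concrete, here is a simplified Anthropic-style message array (shapes trimmed for illustration): a user message injected between a tool_use block and its matching tool_result is rejected by the API, while one appended after the pair is complete is fine.

```ts
const context = [
  { role: "user", content: "Refactor the parser" },
  {
    role: "assistant",
    content: [{ type: "tool_use", id: "t1", name: "read_file", input: { path: "parser.ts" } }],
  },
  // Injecting a plain user message HERE would break the tool_use/tool_result pairing.
  {
    role: "user",
    content: [{ type: "tool_result", tool_use_id: "t1", content: "...file contents..." }],
  },
  // Safe point: the pair above is complete, so queued input can land here.
  { role: "user", content: "Oh, and keep the old error messages around" },
];
```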

Reference

Claude Code does this: you can type mid-response and it finds a natural break to pop in your new message before continuing.

sudoxreboot • Dec 05 '25 15:12

This issue might be a duplicate of existing issues. Please check:

  • #4821: Add ability to unqueue messages (related queue management)
  • #4822: RFC: add an "interject" answer to the "ask" permissions responses (similar goal of injecting user input during AI processing)
  • #3378: Silent Message Insertion API (directly addresses injecting context without interrupting current response)
  • #4156: Plan/build for queued prompts are unintuitive (related to queue behavior)

Feel free to ignore if none of these address your specific case.

github-actions[bot] • Dec 05 '25 15:12

I think that would be pretty neat. I'm gonna take a look and see what might be feasible

will-marella • Dec 05 '25 18:12

> I think that would be pretty neat. I'm gonna take a look and see what might be feasible

It's very feasible. Working on implementing this.

will-marella • Dec 05 '25 21:12

> I think that would be pretty neat. I'm gonna take a look and see what might be feasible

> It's very feasible. Working on implementing this.

I was experiencing this earlier but can't reproduce now. It seems to be working fine. Anyone else still seeing it?

will-marella • Dec 06 '25 00:12