[FEATURE]: make `opencode run` stream in token by token
Feature hasn't been suggested before.
- [x] I have verified this feature I'm about to request hasn't been suggested before.
Describe the enhancement you want to request
In interactive mode, responses are streamed token by token, but in headless/non-interactive mode (`opencode run`) they don't appear to be. It would be very helpful, especially with local models, to see the response in real time rather than all at once after a few minutes.
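For illustration, the requested behavior comes down to writing and flushing each token as it arrives instead of buffering the whole response. A minimal sketch in Python (not opencode's actual implementation; `fake_token_stream` is a hypothetical stand-in for a model's incremental output):

```python
import sys

def fake_token_stream():
    # Hypothetical stand-in for tokens arriving incrementally from a model.
    for token in ["Hello", ", ", "world", "!"]:
        yield token

def run_streaming(stream, out=sys.stdout):
    # Write each token as soon as it arrives and flush stdout,
    # rather than printing the full response at the end.
    chunks = []
    for token in stream:
        out.write(token)
        out.flush()
        chunks.append(token)
    out.write("\n")
    return "".join(chunks)

# run_streaming(fake_token_stream()) shows output appearing token by token.
```

With local models, where generation can take minutes, the flush-per-token loop is what makes the output visible immediately instead of only at completion.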