Is it possible to show thinking/reasoning?
Is your feature request related to a problem? Please describe.
I recently heard about this project and decided to try it out with a locally run openai/gpt-oss-120b instance (llama-server, OpenAI-compatible API). Looking at what it can do, I noticed that the reasoning portions of the output don't seem to show up in an $ interpreter session.
Describe the solution you'd like
Is there an existing setting one could toggle, or is it perhaps planned to show thinking/reasoning tokens?
Describe alternatives you've considered
- look harder for an option I might be able to enable
- use a different model, maybe non-thinking (though for my use cases I find gpt-oss to be pretty good so far)
- look into adding this feature, maybe as a PR
- look for other similar projects
- create a very simple thin wrapper based on harmony
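For what it's worth, a thin wrapper would mostly amount to separating the reasoning stream from the answer stream. Some OpenAI-compatible servers (llama-server included, depending on version and flags) deliver reasoning in a separate field on each streamed delta; the `reasoning_content` field name below is an assumption, not a documented guarantee, so treat this as a sketch:

```python
# Sketch of the thin-wrapper idea: split a streamed chat-completion
# response into reasoning text and final-answer text.
# ASSUMPTION: deltas are dicts shaped like OpenAI chat chunks, with
# reasoning arriving under "reasoning_content" -- field names vary by
# server and settings, so this is illustrative only.

def split_stream(deltas):
    """Collect reasoning and answer text from a stream of delta dicts."""
    reasoning_parts, answer_parts = [], []
    for delta in deltas:
        if delta.get("reasoning_content"):
            reasoning_parts.append(delta["reasoning_content"])
        if delta.get("content"):
            answer_parts.append(delta["content"])
    return "".join(reasoning_parts), "".join(answer_parts)

# Mock chunks standing in for a real streamed response:
mock = [
    {"reasoning_content": "User wants the capital "},
    {"reasoning_content": "of France."},
    {"content": "Paris"},
    {"content": "."},
]
print(split_stream(mock))  # reasoning and answer come out separately
```

A wrapper could then print the first element dimmed (or drop it) and pass only the second on.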
Additional context
No response
I think this was made before reasoning tokens were a thing, and they're just not supported. When I've tried reasoning models from OpenRouter, they just fail.
It would be nice to run gpt-oss locally
It would be a pain to implement in the terminal interface. It would either be always on or always off.
I was thinking the reasoning tokens could be streamed into a sliding window and then replaced with "thought for n seconds" when complete. Or there could be an option to keep it visible as a block quote.
That might work, but showing the whole thing with a toggle will cause more rendering issues
Well, there could be a command line setting, or maybe a setting in %commands, for whether you want it to persist or not. But once it's written to or erased from the terminal, you can't change it. As for scrolling the raw tokens into a sliding window, I already have that working in my incremental rendering work.
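The sliding-window idea above can be sketched with a bounded deque: keep only the last few reasoning lines on screen while streaming, then collapse them to a one-line summary when the model stops thinking. All class and method names here are made up for illustration, not taken from the project:

```python
import time
from collections import deque

class ReasoningWindow:
    """Show only the last `height` lines of streamed reasoning,
    then collapse to a 'Thought for n seconds' summary when done.
    (Hypothetical sketch -- not the project's actual renderer.)"""

    def __init__(self, height=4):
        self.lines = deque(maxlen=height)  # old lines fall off the top
        self.partial = ""                  # line still being streamed
        self.start = time.monotonic()

    def feed(self, token):
        # Accumulate tokens; only complete lines enter the window.
        self.partial += token
        *done, self.partial = self.partial.split("\n")
        self.lines.extend(done)

    def render(self):
        # What the terminal would currently show for the reasoning block,
        # styled as a block quote.
        current = list(self.lines) + ([self.partial] if self.partial else [])
        return "\n".join("> " + line for line in current[-self.lines.maxlen:])

    def finish(self):
        # Replacement text once reasoning is complete.
        elapsed = time.monotonic() - self.start
        return f"Thought for {elapsed:.0f} seconds"
```

A persist toggle would then just decide whether `finish()` overwrites the window in place or leaves the block quote in the scrollback.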