gradio
Option to collapse `<think>...</think>` content inside a `<details>` tag in the `ChatInterface` created by the default `load_chat` (so that this very widespread micro-format does not require extra coding)
This would be useful because, with extremely long reasoning outputs, the browser scrollbar becomes nearly useless.
It would also be useful to have a side panel listing the messages, to allow navigating between LLM outputs and user inputs.
The UI supports this, but it's not done by default. You need to put the thinking block in a separate message with `metadata: {'title': "<your-title>"}`.
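As a hedged sketch of the message shape described above (the emoji title and content strings are illustrative, not from the source): in Gradio's messages format, an assistant message whose `metadata` carries a `title` is rendered as a collapsible accordion, so the reasoning goes in its own message ahead of the answer.

```python
# Sketch of a two-message history where the first message's metadata
# title makes the thinking block collapsible in gr.Chatbot.
thinking_message = {
    "role": "assistant",
    "content": "First, consider the user's constraints...",
    "metadata": {"title": "💭 Thinking"},  # title => collapsible accordion
}
answer_message = {
    "role": "assistant",
    "content": "Here is the final answer.",
}

history = [thinking_message, answer_message]
```

Passing a history structured like this to a `gr.Chatbot` with `type="messages"` is what `demo/chatbot_thoughts` does.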
See, for example, demo/chatbot_thoughts or the smolagents UI.
Given how widespread the `<think>`/`</think>` micro-format is, it would be very nice to extend the default load_chat to offer this option (collapsing the think content) directly, without extra coding.
It might also be nice to create a CLI version of load_chat:
- https://github.com/gradio-app/gradio/issues/11334
That is, I propose extending the OpenAI-compatible client in external.py/load_chat to optionally apply this technique and perform the thinking-message extraction, as in https://github.com/gradio-app/gradio/blob/477730ef51697a355a09020b235f6cc4a6fbb9dc/demo/chatbot_thoughts/run.py#L30-L39
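The extraction being proposed could be sketched roughly as follows. This is a minimal illustration of the technique, not the linked demo's actual code: `split_thoughts` is a hypothetical helper, and the message-dict shape assumes Gradio's messages format with a `metadata` title for the collapsible block.

```python
import re

# Matches a single <think>...</think> block, including newlines.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_thoughts(text: str) -> list[dict]:
    """Split a raw model response into separate messages.

    If a <think> block is present, its content becomes its own assistant
    message with a metadata title (rendered collapsed), followed by the
    remaining answer text. Otherwise the text is returned unchanged as a
    single message.
    """
    match = THINK_RE.search(text)
    if not match:
        return [{"role": "assistant", "content": text}]
    thought = match.group(1).strip()
    answer = THINK_RE.sub("", text).strip()
    return [
        {
            "role": "assistant",
            "content": thought,
            "metadata": {"title": "🧠 Thinking"},  # collapsible in gr.Chatbot
        },
        {"role": "assistant", "content": answer},
    ]
```

A `load_chat` option could then run each raw completion through a helper like this before appending it to the chat history.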
I think it's reasonable for us to consider supporting this by default in gr.load_chat -- I'll reopen @freddyaboulton @aliabid94