See AI's thinking instead of answers
Pre-submit Checks
- [x] I have searched Warp bugs and there are no duplicates
- [x] I have searched Warp known issues page and my issue is not there
- [x] I have included the logs (optional, but helps expedite the bug fix). Log gathering instructions
Describe the bug
Bug Report Title: Intermittent Issue: Warp AI Shows Only <thought> Content Instead of a Final Answer
Description:
Problem:
Occasionally, when interacting with the Warp AI assistant, the final response displayed in the terminal only contains the content from the AI's internal <thought> block, with no final answer or action shown.
Example Scenario: The AI might generate a response internally structured like:

```xml
<thought>My reasoning process goes here...</thought>
```

But the UI only displays "My reasoning process goes here..." instead of a proper final answer or command suggestion.
Attempted Mitigation:
A custom rule (ID: EkspFCaxH8HOkY1wcNGRxV) has been implemented to address this. The rule's description is: "Ensures a visible final output if only a <thought> is present but no <action> or <final_answer> exists". It attempts to fix the issue by automatically copying the <thought> content into a visible <final_answer> when neither an <action> nor a <final_answer> is present.
Persistence: Despite this rule being active, the issue still occurs intermittently. This suggests that either the rule isn't triggered in all relevant cases, or there might be an underlying issue in how Warp processes or displays the final AI response after rules are applied.
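For illustration, here is a minimal sketch of the kind of fallback that rule describes. The tag names come from the rule description above; the function name and the regex-based approach are assumptions made for this sketch, not Warp's actual implementation:

```python
import re

def ensure_visible_answer(response: str) -> str:
    """Hypothetical fallback mirroring the rule's description: if a response
    contains a <thought> block but no <action> or <final_answer>, copy the
    thought content into a <final_answer> so something visible is rendered."""
    thought = re.search(r"<thought>(.*?)</thought>", response, re.DOTALL)
    has_action = re.search(r"<action>.*?</action>", response, re.DOTALL)
    has_answer = re.search(r"<final_answer>.*?</final_answer>", response, re.DOTALL)

    if thought and not (has_action or has_answer):
        # Promote the internal reasoning to a visible final answer.
        return response + f"\n<final_answer>{thought.group(1)}</final_answer>"
    return response

# A response that only contains internal reasoning:
print(ensure_visible_answer("<thought>My reasoning process goes here...</thought>"))
```

If Warp strips or collapses the response after rules run, a fallback like this could fire correctly and the synthesized answer could still be lost downstream, which would match the intermittent behaviour described here.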
Potential Context (Observation): The user has observed this behaviour and suspects it might occur more frequently with certain backend AI models (e.g., potentially Claude, though further investigation is needed to confirm model-specificity).
Expected Behavior:
Warp AI should always display a <final_answer> or <action>, not just the internal <thought> content.
Request: Please investigate why the final answer/action might still be missing in some cases, even with the fallback rule (EkspFCaxH8HOkY1wcNGRxV) in place.
To reproduce
I think it occurs more with Claude than other models, but I haven't spent a lot of time trying to test this, as I have stuff to do.
It sometimes happens on any machine I'm on. I've wondered if it's rules or some weird configuration, but the AI doesn't think so.
I spent a lot of time (did I say a lot?) trying to explain this problem to get answers. The best quick workaround is having it write its answers to files, which really gets in the way of flow.
Expected behavior
I expect to see "Here is what I did..." What I actually see is "I'm going to tell Harry what I did in a comprehensive report." and then nothing after that.
Screenshots, videos, and logs
No response
Operating system (OS)
Linux
Operating system and version
Linux Mint
Shell Version
No response
Current Warp version
No response
Regression
No, this bug or issue has existed throughout my experience using Warp
Recent working Warp date
This has been occurring for at least three of your updates.
Additional context
No response
Does this block you from using Warp daily?
No
Is this an issue only in Warp?
Yes, I confirmed that this only happens in Warp, not other terminals.
I need this also.
+1!!!
I know not disrupting the 'User Experience' is something Warp really cares about, so this could totally be a Developer Settings toggle or something. Please think of all of us folks who are anxious by nature :P
Honestly, this is something almost all AI services with ‘Thinking mode’ have by default, and it’s not just there to look nice.
It also acts as a security feature, kind of like a workaround for the ‘black box problem’ that makes it hard for us to understand or explain how complex AI systems (especially DNNs) come to specific decisions or outputs.
But for me, it ALSO serves as a kind of '--verbose' feature so I can sanity check that there's actually some progress happening.
Now, if part of the 'Thinking...' is actually a placeholder for the network requests to LLMs and how long they take to start streaming tokens, then it's another story :P
+1 The ability to copy the agent's "thought process" under "thought for X seconds" would be very useful, particularly because thinking models tend to think a lot and then act without the "output" block that is typical of 'non-thinking' models. Currently the "copy output" function does not copy any of the agent's thoughts.