cody
Unexpected behavior when sending a new message after stopping output early
Version
1.80.1
Describe the bug
After terminating LLM output via the stop button, typing a new message in the lower input field and clicking send causes both the new input content and the previous partially generated output to be lost, while the previous message's prompt reappears in the input field.
Reproduction Steps:
- Greet Cody
- Ask it to generate sufficiently long content so you have time to stop the output manually
- Click the stop button below to halt output during generation
- Suppose you regret stopping it, but still need the AI to be aware of what it already said (e.g., it made a false assumption)
- So you type a new message in the bottom input field
- Click send
- Observe: the partially generated output and the newly entered content are gone, and the input field reverts to the prompt of the previously terminated message
Refer to the attached video for a demonstration:
https://github.com/user-attachments/assets/68c8b084-06c2-4d3e-9cc8-55058b383011
Expected behavior
The system should retain the LLM output that was manually terminated (optionally noting at the end that it was stopped by the user) and send the new message normally. This would be consistent with how other popular AI platforms, such as ChatGPT, handle a stopped response.
Additional context
No response