anything-llm
[UI/UX]: Continue generation button for limited context output
How are you running AnythingLLM?
Local development
What happened?
Deepseek R1 produces a reasoning portion before its answer, which can cause AnythingLLM to hit the context limit and abruptly end the generation with unfinished text, and there is no way in the application to continue it. Pressing the spacebar in the request field STARTS A NEW GENERATION, and the previous text context is lost completely. Oobabooga has had a Continue button for years, and recently added another one in a visible place specifically for Deepseek. This is clearly a gap in functionality.
Are there known steps to reproduce?
Start Deepseek R1 (any quant) with a 4096 or 8192 token context size; the output is cut off mid-generation and cannot be continued.