anything-llm
[BUG]: Sometimes the thought process cannot be fully rendered when using the deepseek R1 model
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
Sometimes the thought process cannot be fully rendered:
In this screenshot, the thinking process is cut off and only part of it is rendered.
Are there known steps to reproduce?
Use the DeepSeek provider, then choose the deepseek-reasoner model (a sketch of what that model streams back is included below).
The problem of the thought process not being fully rendered is even more obvious in this image.
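For reference, here is a minimal sketch of how deepseek-reasoner streams its thinking over DeepSeek's OpenAI-compatible API: the chain of thought arrives in a `reasoning_content` delta field, separate from the final answer in `content`. This is illustrative only (the client setup, the `DEEPSEEK_API_KEY` environment variable, and the sample prompt are my assumptions), not AnythingLLM's actual provider code, but it shows the two separate streams that the renderer has to stitch together.

```ts
import OpenAI from "openai";

// DeepSeek exposes an OpenAI-compatible endpoint. With deepseek-reasoner,
// the thought process streams in `reasoning_content` and the final answer
// streams in `content` as separate delta fields.
const client = new OpenAI({
  baseURL: "https://api.deepseek.com",
  apiKey: process.env.DEEPSEEK_API_KEY, // assumed env var name
});

async function main() {
  const stream = await client.chat.completions.create({
    model: "deepseek-reasoner",
    messages: [{ role: "user", content: "9.11 and 9.8, which is greater?" }],
    stream: true,
  });

  for await (const chunk of stream) {
    // `reasoning_content` is a DeepSeek extension, so it is not part of the
    // OpenAI SDK's delta type; read it defensively.
    const delta = chunk.choices[0]?.delta as {
      content?: string | null;
      reasoning_content?: string | null;
    };
    if (delta?.reasoning_content) process.stdout.write(delta.reasoning_content); // thought process
    if (delta?.content) process.stdout.write(delta.content); // final answer
  }
}

main();
```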