[BUG]: Vision models don't retain memory of images past one prompt
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
When I upload an image, I can use a vision model such as llama3.2-vision:11b to describe it, but subsequent prompts have no memory of the image.
I would expect to be able to ask repeated questions about the image and for it to remain in the current context until the context window is exhausted.
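For context, here is a minimal sketch of the behavior I expect, written against the ollama Python client directly rather than AnythingLLM's code (the model tag is the one I'm using; the file path and prompts are placeholders). The image only stays "in memory" if its message, including the images field, is re-sent as part of the history on every follow-up request:

```python
import ollama

# Turn 1: attach the image so the vision model can see it.
messages = [
    {
        "role": "user",
        "content": "Describe this image.",
        "images": ["photo.jpg"],  # placeholder path to the uploaded image
    }
]
first = ollama.chat(model="llama3.2-vision:11b", messages=messages)
messages.append(first["message"])

# Turn 2: the model can only answer about the image because the earlier
# message (with its images field) is carried forward in the history.
messages.append({"role": "user", "content": "What color is the object in the foreground?"})
second = ollama.chat(model="llama3.2-vision:11b", messages=messages)
print(second["message"]["content"])
```

If the client drops the image attachment from the history after the first turn, the model has nothing to look at on follow-up prompts, which matches what I'm seeing in the app.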
Are there known steps to reproduce?
No response