[BUG]: I set different models for Ollama per workspace, but the same one is used even after switching.
How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
Two issues:
1. Whichever model was set up last in a workspace is now set for all workspaces.
2. Even after setting a new model, whichever model was previously loaded is still used.
Expected behaviour:
- Each workspace should keep its assigned model. If models can only be assigned globally, then the setting should be a global config.
- When a different model is selected, that new setting should be used (as open-webui does).
Great software and I'm looking forward to more great things. Love how easy it is to use RAG.
Are there known steps to reproduce?
- I set up a workspace to use llama3.3:70b on Ollama and it works perfectly.
- I added a second workspace to use Deepseek-r1:70b.
I expected the first workspace to still have llama3 in its config, but it now had Deepseek. When I started using the second workspace, it continued to use the (already loaded) llama3 model.
Same problem here; the experience isn't great. I thought this would be stronger than the other project, so I had some hope.
Closing, as after trying this any number of ways I cannot replicate it. Ollama loads the new model as expected.
When the global Ollama LLM is set and a workspace has nothing defined, all chats use the global LLM.
When an LLM is set per workspace, the change is persisted in each workspace, and chats load the appropriate model for the respective workspace based on its setting.
Some items that would help show this:
- A video of the global setting, going to a workspace to show its chat model, and then sending a chat showing the global model was used.
- Evidence that saving your workspace LLM in workspace 1 changed the model in workspace 2.
I think there's a misunderstanding of the bug (my fault). I'm using Ollama as the system LLM provider. When I configure the default, I select a model, say llama3.3. Going to a workspace config, my only choice is to change the entire backend provider, not the provider and model together. For instance, I want one workspace to use ollama:llama3.3 and another to use ollama:DeepSeek-r1, and I don't see how this is possible. Attached is a screenshot of the selections in the "Workspace LLM Provider" dropdown:
Scroll down in that view (it is scrollable). You can also type "Ollama" in the input and click on Ollama; it will use your system's Ollama connection but allow you to specify the model, which is exactly what you want.
Example (System LLM screen showing Ollama connection)
Workspace Chat settings (Ollama selected by searching "ollama" or scrolling down)
Once Ollama is selected, you can see the model dropdown. I only have a single LLM installed, but any model available in Ollama will be listed here.
Don't I feel like a dolt. I didn't see scroll bars so I didn't even try to scroll.
Sorry and thanks. Great project.
@rabinnh If it was non-obvious, then we should try to make it obvious. You are not the first this has happened to, so clearly we need some UX work on that component!