[FEAT]: Support multiple providers at once for workspaces
What would you like to see?
It would be great if AnythingLLM allowed the admin to configure URLs and API keys for multiple LLM providers. For example, I would love it if one workspace used OpenAI and another workspace used a LocalAI endpoint for something like Mixtral 8x7B. It seems that for any given AnythingLLM instance, there is only one preferred LLM provider, and that setting applies across all workspaces.
Thank you!
Was also looking into this. For my business use case, I am using Ollama as the backend, with one Ollama instance running Mistral and another running CodeLlama. I would love to have a workspace dedicated to our RAG chatbot and another for code generation.
@icsy7867 Are the Ollama models running on the same server? If so, you can do that already via the workspace's settings. If the model is available, you can override the system model.
This issue is more towards connecting a totally different provider. Unless you mean you have Ollama running somewhere and the Mistral API service is another thing you want to connect, not just Mistral running on Ollama.
I wasn't under the impression that an Ollama instance could run multiple models simultaneously. Currently I am running two separate Docker containers. Will give it a whirl!
You can! You would just use the ollama serve command and it will run an OpenAI-compatible API at http://127.0.0.1:11434. Plug that into AnythingLLM and you can list what models you want to use. I believe Ollama hot-loads them; obviously, more models means more RAM.
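Rough sketch of what that looks like against a single Ollama server, assuming the default port 11434 and that the example models ("mistral", "codellama") have already been pulled:

```python
# Minimal sketch: one Ollama server serving multiple models from the same
# endpoint. Model names are examples, not a required setup.
import requests

OLLAMA_BASE = "http://127.0.0.1:11434"

# List every model the single Ollama instance can serve.
tags = requests.get(f"{OLLAMA_BASE}/api/tags").json()
print([m["name"] for m in tags.get("models", [])])

# Chat against a specific model via the OpenAI-compatible endpoint;
# Ollama loads the requested model on demand (more models loaded = more RAM).
resp = requests.post(
    f"{OLLAMA_BASE}/v1/chat/completions",
    json={
        "model": "mistral",
        "messages": [{"role": "user", "content": "Hello from one shared Ollama server"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```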
Yeah, in that case Ollama is a single provider of multiple models, which is great. But just to be clear, what I'm looking for is the ability to configure per-workspace LLM providers so I can associate entirely different providers (OpenAI, Anthropic, Ollama, etc.) with specific workspaces within a single AnythingLLM instance. I don't believe that AnythingLLM supports that yet? Currently I'd have to spin up a different instance of AnythingLLM for each provider.
@sheneman Absolutely, I'm clear on the scope and detail of this issue being distinct providers, not just a different model from the singular system provider that is currently live.
We are on the same page for this issue - I was just clarifying!
@timothycarambat - Awesome. AnythingLLM is absolutely wonderful, by the way!
At the very least, I would like to be able to assign a different token context window value for each chat model selected. Thank you.
Related https://github.com/Mintplex-Labs/anything-llm/issues/969