localCopilot
Ollama support
Ollama is a very popular backend for running local models, with a large library of supported models. It would be great to see Ollama support.
Does it expose an OpenAI-compatible endpoint? If it does, then support should be simple.
It doesn't, but you can use LiteLLM to wrap the endpoint and make it OpenAI-compatible.
good idea, will work on it soon
I've been super busy the last few months, so I'd appreciate it if you could try things out and let me know if you face any issues. It should be simple: just point the middleware to the Ollama/LiteLLM OpenAI port.
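For reference, here is a rough sketch of that wiring, assuming the LiteLLM proxy is started in front of Ollama (the proxy command, port 4000, and the model name are illustrative and not part of localCopilot). The proxy exposes an OpenAI-compatible port, and anything that speaks the OpenAI API, which is roughly what the middleware does, can then be pointed at it.

```python
# Sketch only: assumes the LiteLLM proxy was started in front of Ollama,
# e.g. `litellm --model ollama/codellama --api_base http://localhost:11434`,
# which exposes an OpenAI-compatible endpoint (port 4000 here is illustrative).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",   # LiteLLM proxy's OpenAI-compatible port
    api_key="not-needed-locally",       # placeholder; a local proxy without auth ignores it
)

completion = client.chat.completions.create(
    model="ollama/codellama",           # illustrative model name
    messages=[{"role": "user", "content": "Write a hello world in Python."}],
)
print(completion.choices[0].message.content)
```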
So it's possible to run Ollama in Docker, and say it exposes the usual localhost:11434 port; can LiteLLM then turn that exposed port into an OpenAI-compatible endpoint? Or do you need to run the Ollama model directly from LiteLLM?
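Roughly, yes: LiteLLM does not load or run the model itself, it just forwards HTTP requests to Ollama's API, so publishing port 11434 from the container is enough. A minimal sketch, assuming Ollama is running in Docker with that port published and a model already pulled (the model name is illustrative):

```python
# Sketch only: assumes Ollama is running in Docker with port 11434 published
# to the host (e.g. `docker run -d -p 11434:11434 ollama/ollama`) and that a
# model has already been pulled inside the container. LiteLLM translates the
# OpenAI-style call into requests against Ollama's API at api_base.
import litellm

response = litellm.completion(
    model="ollama/codellama",            # illustrative model name
    api_base="http://localhost:11434",   # the port published by the container
    messages=[{"role": "user", "content": "Explain Python list comprehensions."}],
)
print(response.choices[0].message.content)
```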
I'll try to get to it soon, after fixing the authentication.