
Ollama support

shouryan01 opened this issue 1 year ago · 6 comments

Ollama is a very popular backend for running local models, with a large library of supported models. It would be great to see Ollama support.

shouryan01 · Jan 09 '24

Does it expose an OpenAI endpoint? If it does, support should be simple.

FarisHijazi · Jan 09 '24

It doesn't; however, you can use LiteLLM to wrap the endpoint and make it OpenAI-compatible.

shouryan01 · Jan 16 '24
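
As a concrete starting point, here is a minimal sketch of that wrapping through LiteLLM's Python interface, assuming Ollama is running on its default port 11434 and a model such as codellama has already been pulled (both the port and the model name are assumptions to adjust):

```python
# Sketch: calling a local Ollama server through LiteLLM's OpenAI-style
# completion interface. The model name "codellama" and the default
# Ollama port 11434 are assumptions.
import litellm

response = litellm.completion(
    model="ollama/codellama",            # "ollama/" prefix routes to the Ollama provider
    api_base="http://localhost:11434",   # default Ollama port
    messages=[{"role": "user", "content": "Write a hello world in Python."}],
)
print(response.choices[0].message.content)
```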

Good idea, I'll work on it soon.

I've been super busy the last few months; I'd appreciate it if you could try things out and let me know if you face any issues. It should be as simple as pointing the middleware to the Ollama/LiteLLM OpenAI port.

FarisHijazi · Feb 03 '24
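
For anyone trying this out, one way to exercise that port is to point a stock OpenAI client at the LiteLLM proxy instead of api.openai.com. A minimal sketch, assuming the proxy was started with something like `litellm --model ollama/codellama` and listens on port 4000 (the model name is an assumption, and the default proxy port has differed across LiteLLM versions):

```python
# Sketch: pointing a stock OpenAI client at a local LiteLLM proxy that
# fronts Ollama. Port 4000 and the model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",   # LiteLLM proxy, OpenAI-compatible surface
    api_key="not-needed-locally",       # placeholder; a local proxy typically ignores it unless a key is configured
)

completion = client.chat.completions.create(
    model="ollama/codellama",
    messages=[{"role": "user", "content": "Complete this function: def add(a, b):"}],
)
print(completion.choices[0].message.content)
```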

It doesn't; however, you can use LiteLLM to wrap the endpoint and make it OpenAI-compatible.

So it's possible to run Ollama in Docker, and say it exposes the usual localhost:11434 port; then using LiteLLM you can convert that exposed port into an OpenAI endpoint? Or do you need to run the Ollama model right from LiteLLM?

Mayorc1978 · Feb 04 '24

Looks like LiteLLM provides a Docker image.

shouryan01 · Feb 07 '24
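
In the wrapping setup described above, Ollama keeps serving its native API on its own port while the LiteLLM proxy (local install or its Docker image) exposes the OpenAI-compatible surface separately. A minimal sketch to sanity-check both sides, assuming the defaults localhost:11434 for Ollama and localhost:4000 for the proxy (the proxy port is an assumption):

```python
# Sketch: quick check that both services are reachable. Ollama serves its
# native API on 11434; the LiteLLM proxy exposes OpenAI-style routes on
# its own port (4000 assumed here; adjust to your setup).
import json
import urllib.request


def get_json(url: str) -> dict:
    """Fetch a URL and decode the JSON body."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)


# Native Ollama endpoint: lists the models pulled locally.
print(get_json("http://localhost:11434/api/tags"))

# LiteLLM proxy, OpenAI-style endpoint: the same models in the OpenAI schema.
print(get_json("http://localhost:4000/v1/models"))
```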

I'll try to get on it soon, after fixing the authentication.

FarisHijazi · Jun 26 '24