
epic: Improving Ollama integration

Open eckartal opened this issue 1 year ago • 17 comments

Problem

Integrating Ollama with Jan using the single OpenAI endpoint feels challenging. It's also a hassle to 'download' the model.

Success Criteria

  • Make it easier to add Ollama endpoints.
  • Automatically find available Ollama models and settings.
  • Allow multiple Ollama instances (e.g., local for small models, server/cloud for larger models).

Additional context

Related Reddit comment (to be updated): https://www.reddit.com/r/LocalLLaMA/comments/1d8n9wr/comment/l77ifd1/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

eckartal avatar Jun 05 '24 16:06 eckartal

Yes please! This is my biggest blocker with Jan. I don't want multiple redundant model file locations. I'd like my ollama models to be easily used.

ShravanSunder avatar Jun 29 '24 18:06 ShravanSunder

I second this. I looked at the docs about "Ollama integration", but all that does is set up the server endpoint. You can't select an Ollama model that's already downloaded to wherever Ollama stores its models, and I don't think you can upload the model either. On my openSUSE Tumbleweed system, Ollama stores its models in /var/lib/ollama/.ollama/models/ rather than the default Ollama location, and the Import file selection dialog can't even see the directories below /var/lib/ollama.

richardstevenhack avatar Jul 12 '24 06:07 richardstevenhack

You can import already-downloaded models (via local symlink) directly from the Hub.


We likely won't support a direct integration for a while, as we already integrate with Hugging Face.

freelerobot avatar Sep 05 '24 08:09 freelerobot

You shouldn't have to import a model, though. If you look at how other tools do it, they provide a list of available models from the API.

sammcj avatar Sep 05 '24 08:09 sammcj

That's right: a call to the model-listing endpoint, then letting one of the returned models be selected for use, is what we're talking about, at least to start. I think 0xSage is a little mixed up about what we're all asking for, maybe? This is holding back anyone who has Ollama running somewhere with Modelfiles already configured that they want to use.
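For reference, a minimal sketch of the kind of call being described, assuming a default local Ollama install on port 11434 and its /api/tags model-listing endpoint (Node 18+ for the built-in fetch); this is not Jan's code, just an illustration:

```typescript
// Sketch: ask a running Ollama server which models it already has,
// via its /api/tags endpoint, so a UI could offer them in a dropdown
// instead of requiring a manual import.
const OLLAMA_BASE = "http://localhost:11434"; // assumed default install

interface OllamaTag {
  name: string;        // e.g. "llama3.1:latest"
  size: number;        // bytes on disk
  modified_at: string; // ISO timestamp
}

async function listOllamaModels(): Promise<string[]> {
  const res = await fetch(`${OLLAMA_BASE}/api/tags`);
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const body = (await res.json()) as { models: OllamaTag[] };
  return body.models.map((m) => m.name);
}

listOllamaModels().then((names) => console.log(names.join("\n")));
```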

sean-public avatar Sep 05 '24 08:09 sean-public

Ah, I see what you guys are saying. We can add a dropdown option here that pings your existing running server.

freelerobot avatar Sep 05 '24 08:09 freelerobot

Yeah! That’d be fantastic!

sammcj avatar Sep 05 '24 09:09 sammcj

Yup, that's the idea.

richardstevenhack avatar Sep 05 '24 18:09 richardstevenhack

yes please

ShravanSunder avatar Sep 10 '24 15:09 ShravanSunder

~my local models downloaded via ollama are curious to test Jan 👀~ I was able to symlink ollama models to Jan using https://github.com/sammcj/gollama

mrtysn avatar Sep 17 '24 11:09 mrtysn

" I was able to symlink ollama models to Jan using https://github.com/sammcj/gollama"

I use gollama to link to LMStudio. How did you use it to link to Jan? Did you put the Jan directory into the LMStudio files path in Gollama?

richardstevenhack avatar Sep 17 '24 15:09 richardstevenhack

@richardstevenhack I probably could update Gollama to add Jan linking support, but I think it would make more sense for Jan to just support Ollama as an LLM provider; that way you'd get all the nice Ollama API features and wouldn't have to load models in multiple places.

sammcj avatar Sep 17 '24 21:09 sammcj

I agree. If everyone rallied around Ollama as the main AI server for PCs, and other programs concentrated on the UI and additional features on top, things would be easier. Until then, it would be nice to have the ability to link Jan to Ollama.

richardstevenhack avatar Sep 18 '24 00:09 richardstevenhack

Would also love to see Ollama models from a dropdown in Jan via a model provider! Running models on both Ollama and Jan simultaneously can bring most computers to their knees!

khromov avatar Sep 20 '24 00:09 khromov

" I was able to symlink ollama models to Jan using https://github.com/sammcj/gollama"

I use gollama to link to LMStudio. How did you use it to link to Jan? Did you put the Jan directory into the LMStudio files path in Gollama?

Should have included the steps in the original comment:

  1. Install ollama and have at least one model downloaded.
  2. Install LMStudio (I assume this step is optional; you should be able to manually create the default folder path for the blobs and skip this). The default folder path is ~/.cache/lm-studio/models.
  3. Install gollama and make it create symlinks in the default path: gollama -L. Again, you can skip (2) if gollama adds/has support for custom folder paths.
  4. Install Jan and manually import the GGUF file. e.g. ~/.cache/lm-studio/models/llama3.1/llama3.1-latest-GGUF/llama3.1-latest.gguf.

Feedback:

  • It would be good to be able to specify a folder path so you don't have to manually add every model.
  • It would be good if Jan scanned such folders by default (a rough sketch of such a scan follows this list).
  • It would be great if no symlinking was necessary and Jan could already see the ollama installation.
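To make that second point concrete, here is a hypothetical sketch (not Jan's actual code) of what a default folder scan for GGUF files could look like, assuming the LM Studio default path from step 2 and Node's standard fs/os/path modules:

```typescript
// Hypothetical sketch: recursively collect .gguf files under a folder,
// following symlinks (gollama creates symlinks, so Dirent type checks
// alone would miss linked directories and blobs).
import { promises as fs } from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

async function findGgufFiles(dir: string): Promise<string[]> {
  const found: string[] = [];
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    // stat() follows symlinks; skip anything broken or unreadable.
    const info = await fs.stat(full).catch(() => null);
    if (!info) continue;
    if (info.isDirectory()) {
      found.push(...(await findGgufFiles(full)));
    } else if (full.toLowerCase().endsWith(".gguf")) {
      found.push(full);
    }
  }
  return found;
}

// LM Studio's default models folder, as used in the steps above.
findGgufFiles(path.join(os.homedir(), ".cache", "lm-studio", "models"))
  .then((models) => console.log(models));
```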

mrtysn avatar Sep 20 '24 04:09 mrtysn

I followed the above advice, and I now see that Jan has added an import option, "Keep Original Files & Symlink": "You maintain your model files outside of Jan. Keeping your files where they are, and Jan will create a smart link to them." Very nice!

richardstevenhack avatar Sep 20 '24 05:09 richardstevenhack

Very nice. Just an observation: it worked for me when I selected the folder to be imported (containing a GGUF file). When I tried to select the symlink to the actual model file, I got an error saying that only GGUF files are supported.

abdessalaam avatar Sep 25 '24 05:09 abdessalaam

+1

s-celles avatar Jan 10 '25 15:01 s-celles

  • Closed in favor of https://github.com/janhq/jan/issues/3786
  • After the 0.5.14 release (10 Feb 2025), users will be able to add custom remote providers, including Ollama: https://jan.ai/docs/install-engines
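For anyone wiring that up, a hedged sketch of the endpoint such a custom provider would point at, assuming Ollama's OpenAI-compatible /v1 API on the default port ("llama3.1" is only an example model name and must already be pulled):

```typescript
// Sketch: Ollama exposes an OpenAI-compatible API under /v1, which is
// what a custom remote provider entry would target. Assumes Node 18+
// and a default local install on port 11434.
const BASE_URL = "http://localhost:11434/v1";

async function chat(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Ollama ignores the key, but OpenAI-style clients expect one.
      Authorization: "Bearer ollama",
    },
    body: JSON.stringify({
      model, // example: "llama3.1", already pulled via ollama
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return data.choices[0].message.content;
}

chat("llama3.1", "Say hello in one sentence.").then(console.log);
```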

cc @s-celles @sammcj @khromov @mrtysn @ShravanSunder @abdessalaam @richardstevenhack

ux-han avatar Jan 20 '25 16:01 ux-han

I'm having issues connecting to my local Ollama server. All credentials should be right; Open WebUI works flawlessly.

I also tried /v1/.
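One way to narrow this down (a sketch, assuming the default port 11434 and no reverse proxy in between) is to check whether the endpoints Jan would hit are reachable at all:

```typescript
// Sketch: probe the endpoints a client like Jan would hit. The root
// path of a healthy Ollama server returns "Ollama is running", and
// /v1/models lists models via the OpenAI-compatible API. Assumes the
// default port and Node 18+.
const HOST = "http://localhost:11434";

async function probe(url: string): Promise<void> {
  try {
    const res = await fetch(url);
    const text = (await res.text()).slice(0, 120);
    console.log(`${url} -> ${res.status}: ${text}`);
  } catch (err) {
    console.log(`${url} -> unreachable: ${(err as Error).message}`);
  }
}

(async () => {
  await probe(`${HOST}/`);          // expect: "Ollama is running"
  await probe(`${HOST}/v1/models`); // expect: JSON list of models
})();
```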


clicktodev avatar Sep 01 '25 12:09 clicktodev