
How to connect Lluminous to local llama.cpp?

Open vojtapolasek opened this issue 1 year ago • 1 comment

Hello, I really like your app, it looks great! However, I can't find information about using local models through llama.cpp. I am using the packaged client + server on Linux. I use the "--llama" parameter to give the path to my llama.cpp repository with the compiled llama-server etc., but I don't see any models. How should I use this, please? Thank you.

vojtapolasek avatar Aug 08 '24 06:08 vojtapolasek

The directory where lluminous looks for models is hardcoded as "models" inside the llama.cpp directory you pass through the --llama parameter, so you'll need to create it and move your models in there.

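A minimal sketch of that setup, assuming your llama.cpp checkout lives at `~/llama.cpp` (adjust `LLAMA_DIR` to wherever yours actually is; the model filename below is just a placeholder):

```shell
# Assumed path: point LLAMA_DIR at your actual llama.cpp checkout.
LLAMA_DIR="$HOME/llama.cpp"

# lluminous only looks in the hardcoded "models" subdirectory, so create it.
mkdir -p "$LLAMA_DIR/models"

# Then move (or symlink) your GGUF model files into it, e.g.:
# mv ~/Downloads/some-model.Q4_K_M.gguf "$LLAMA_DIR/models/"
```

After that, launching lluminous with `--llama "$LLAMA_DIR"` should list the models it finds there.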

It's worth noting, though, that local model support currently only works with models that use the ChatML template format. The feature I'm currently working on is adding Ollama support, which means it will work with any model, and hopefully with even less hassle. Sorry for the inconvenience!

zakkor avatar Aug 17 '24 10:08 zakkor

@vojtapolasek there is now Ollama support, just start the Ollama server and you should see your installed models in the model picker.
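As a quick sanity check (assuming `curl` is available), you can ask the local Ollama server for its model list yourself; Ollama's HTTP API listens on port 11434 by default and exposes installed models at `/api/tags`:

```shell
# Probe the default Ollama endpoint; "up" means lluminous should see your models.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  ollama_status=up
else
  ollama_status=down   # server not running; start it with: ollama serve
fi
echo "Ollama server is $ollama_status"
```

If it reports "down", start the server (e.g. `ollama serve`) and pull at least one model with `ollama pull` so the picker has something to show.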

zakkor avatar Jan 31 '25 16:01 zakkor