feat: Custom model location for Llama.cpp engine
Problem Statement
I'm using Hugging Face models via different applications: LM Studio, AnythingLLM, Jan, etc. It is a bad idea to download the same models each time, so I store them in one place, ~/Documents/HuggingFace, to make them reusable.
Right now Jan does not allow changing its models folder location the way LM Studio does.
Feature Idea
Add a setting in the Llama.cpp section to change the default model location and allow the user to specify it if needed.
Hi @olegshulyakov, I want to clarify a few things:
- Jan does support importing models from different locations (by either copying the model or adding a symlink to it; see the sketch after this list).
- Different applications use different folder structures to store models (from different providers and authors).
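For context, here is a minimal sketch of what a single-model import with a copy or a symlink could look like. The function name, the `janModelsDir` parameter, and the option flag are illustrative only and are not Jan's actual extension API.

```ts
import { copyFile, symlink } from 'fs/promises';
import { basename, join } from 'path';

// Hypothetical helper: bring one model file into Jan's models folder,
// either by copying it or by creating a symlink back to the original.
async function importSingleModel(
  srcPath: string,
  janModelsDir: string,
  useSymlink: boolean,
): Promise<string> {
  const dest = join(janModelsDir, basename(srcPath));
  if (useSymlink) {
    // The file stays in its original location; Jan follows the link.
    await symlink(srcPath, dest);
  } else {
    // The file is duplicated inside Jan's data folder.
    await copyFile(srcPath, dest);
  }
  return dest;
}
```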
@david-menloai Agreed, but since most models are downloaded from HF, a reusable folder structure is straightforward: HuggingFace/<author>/<model>/<files>
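For illustration, a shared folder following that layout might look like this (the author and model names below are just examples, not required):

```
~/Documents/HuggingFace/
├── TheBloke/
│   └── Mistral-7B-Instruct-v0.2-GGUF/
│       └── mistral-7b-instruct-v0.2.Q4_K_M.gguf
└── bartowski/
    └── Llama-3.2-3B-Instruct-GGUF/
        └── Llama-3.2-3B-Instruct-Q4_K_M.gguf
```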
Hi @olegshulyakov, that's a great idea to change the model location. @qnixsynapse is working on a new llama.cpp extension that allows users to set the model folder location from there.
Currently, import() in the new llamacpp extension does something similar: the model file stays in its original location. We could introduce a recursive 'model folder' import() to achieve what has been described in this ticket, roughly as sketched below.
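A rough sketch of what that recursive import could do, assuming the shared HuggingFace/<author>/<model>/<files> layout. `importModel()` below is only a stand-in for the extension's real import(); the actual call and its signature may differ.

```ts
import { readdir } from 'fs/promises';
import { extname, join } from 'path';

// Stand-in for the llamacpp extension's import(); here it only logs the
// path, leaving the file in its original location.
async function importModel(modelPath: string): Promise<void> {
  console.log(`registering ${modelPath} without copying it`);
}

// Recursively collect every .gguf file under <root>/<author>/<model>/.
async function findGgufFiles(dir: string): Promise<string[]> {
  const entries = await readdir(dir, { withFileTypes: true });
  const files: string[] = [];
  for (const entry of entries) {
    const fullPath = join(dir, entry.name);
    if (entry.isDirectory()) {
      files.push(...(await findGgufFiles(fullPath)));
    } else if (extname(entry.name) === '.gguf') {
      files.push(fullPath);
    }
  }
  return files;
}

// "Model folder" import: register every model found under the shared
// folder without moving or duplicating any file.
async function importModelFolder(root: string): Promise<void> {
  for (const modelPath of await findGgufFiles(root)) {
    await importModel(modelPath);
  }
}

importModelFolder(`${process.env.HOME}/Documents/HuggingFace`).catch(console.error);
```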