Allow "foreign" ggml models in chat client
Feature request
The models that can be downloaded through the chat client are ggml files, which can also be loaded with llama.cpp's main program.
Motivation
I know that very frequent "breaking changes" are happening in llama.cpp right now, which means newer models won't run on older versions. Nevertheless, there is a large number of ggml models on Hugging Face that work with various older versions of llama.cpp.
Your contribution
No matter which models I put in the models directory beyond the ones gpt4all offers for download, they are simply not displayed or loaded by the chat software. The WebUI doesn't behave like that: there I can use foreign ggml files without any problems. It is incomprehensible to me that there seems to be an artificial restriction here. Couldn't all ggml files in the models directory simply be made accessible, on the user's own responsibility if necessary? The worst that could happen is probably just a program crash anyway.
You can do that just fine. Just make sure the model's filename is prefixed with "ggml-" and that the model is compatible with llama.cpp from before May 12th.
See attached image:
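For anyone unsure whether a downloaded model predates that cutoff, a quick way to check is to inspect the file header. Below is a minimal sketch, not part of gpt4all itself; it assumes the ggml container magics used by llama.cpp (`ggml`, `ggmf`, `ggjt`) and that the May 12th breaking change corresponds to the `ggjt` version bump to 2. The script name and model path are examples.

```python
import struct
import sys

# ggml container magics as little-endian uint32 values (assumed from
# llama.cpp's LLAMA_FILE_MAGIC_* constants of that era):
MAGICS = {
    0x67676D6C: "ggml (original, unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (mmap-able)",
}

def check_model(path: str) -> None:
    """Print the ggml container type and version of a model file."""
    with open(path, "rb") as f:
        magic = struct.unpack("<I", f.read(4))[0]
        if magic not in MAGICS:
            print(f"{path}: not a ggml file (magic {magic:#010x})")
            return
        name = MAGICS[magic]
        if magic == 0x67676D6C:
            # The original format has no version field at all.
            print(f"{path}: {name} -- predates the breaking changes")
            return
        version = struct.unpack("<I", f.read(4))[0]
        print(f"{path}: {name}, version {version}")
        if magic == 0x67676A74 and version >= 2:
            # Assumption: ggjt v2+ is the post-May-12th quantization format.
            print("  likely too new -- may not load in the chat client")

if __name__ == "__main__":
    check_model(sys.argv[1])  # e.g. python check_ggml.py models/ggml-model.bin
```

If the script reports `ggml`, `ggmf`, or `ggjt` version 1, renaming the file to start with `ggml-` should be enough for the chat client to list it.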
@maddes8cht Stand by for a PR in the next few days that addresses this. I'm working on changing up the model list handling and display. Here's a WIP of how we display the names, for example.
Solved