
Allow "foreign" ggml models in chat client

maddes8cht opened this issue on May 22, 2023

Feature request

The models that can be downloaded with the chat client are ggml files, which can also be loaded with llama.cpp's main program.
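For illustration, such a file can also be loaded from code. A minimal sketch using the llama-cpp-python bindings (an assumption; the issue itself only mentions llama.cpp's main program, and the file name below is hypothetical):

```python
# Minimal sketch: loading a downloaded ggml model with the
# llama-cpp-python bindings (pip install llama-cpp-python).
# The model file name is a hypothetical example.
from llama_cpp import Llama

llm = Llama(model_path="./models/ggml-model-q4_0.bin")
out = llm("Q: What is ggml? A:", max_tokens=64)
print(out["choices"][0]["text"])
```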

Motivation

I know that llama.cpp currently sees very frequent breaking changes, which means new models don't run on older versions. Nevertheless, Hugging Face hosts a large number of ggml models that work with various older versions of llama.cpp.

Your contribution

No matter which models I put in the models directory, apart from the gpt4all downloadable ones, they are simply not displayed or loaded by the chat software. The WebUI doesn't behave like this: there I can use foreign ggml files without any problems. It is incomprehensible to me that there seems to be an artificial restriction here. Can't we just make all ggml files in the models directory accessible, if necessary at the user's own risk? The worst that could happen is probably just a program crash anyway.

maddes8cht · May 22 '23 17:05

You can do that just fine. Just make sure the model's file name is prefixed with "ggml-" and that the model is compatible with llama.cpp from before May 12th.

See attached screenshot: [image]

TechnoByteJS · May 23 '23 13:05
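To illustrate the advice above, here is a minimal Python sketch, not part of gpt4all: the models directory path, the model file name, and the magic/version constants are assumptions based on the ggml file formats llama.cpp used at the time (the May 12th, 2023 change bumped the "ggjt" container to version 2 with new quantization formats). It checks whether a file predates that change and copies it into the models directory with the required "ggml-" prefix:

```python
import shutil
import struct
from pathlib import Path

# Assumed models directory; other platforms/installs differ.
MODELS_DIR = Path.home() / ".local" / "share" / "nomic.ai" / "GPT4All"

# Magic numbers of the ggml family of file formats (read as
# little-endian uint32 from the start of the file).
GGML = 0x67676D6C  # unversioned, oldest format
GGMF = 0x67676D66  # versioned
GGJT = 0x67676A74  # versioned, mmap-able

def predates_may12(path: Path) -> bool:
    """Heuristic: True if the file uses a ggml format from before
    the May 12th 2023 quantization change (i.e. ggjt v2)."""
    with path.open("rb") as f:
        magic, = struct.unpack("<I", f.read(4))
        if magic == GGML:
            return True               # no version field at all
        if magic in (GGMF, GGJT):
            version, = struct.unpack("<I", f.read(4))
            return version == 1       # v2 and later are post-May-12
    return False                      # unknown magic: not a ggml file

def install(model: Path) -> Path:
    """Copy a model into the chat client's models directory with
    the 'ggml-' prefix it expects."""
    name = model.name
    if not name.startswith("ggml-"):
        name = "ggml-" + name
    MODELS_DIR.mkdir(parents=True, exist_ok=True)
    dest = MODELS_DIR / name
    shutil.copy2(model, dest)
    return dest

model = Path("wizardLM-7B.q4_0.bin")  # hypothetical download
if predates_may12(model):
    print("installed as", install(model))
else:
    print("format too new for this gpt4all build")
```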

@maddes8cht Stand by for a PR in the next few days that addresses this. I'm working on changing up the model list handling and display. Here's a WIP screenshot of how we display the names, for example: [image]

jstayco · May 27 '23 02:05

Solved

niansa · Aug 11 '23 13:08