
GPT-J models fail to load unless device is set to "CPU"


Current versions of GPT4All select a GPU backend before checking which llmodel implementations actually support a given model file. As a result, GPT-J models fail to load whenever the device is set to anything other than "CPU". I believe this used to work because both the llama.cpp/Kompute and GPT-J/CPU llmodel libraries were named "default", but that changed in #2310.
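
A minimal sketch of the ordering fix this implies, not the actual GPT4All code: the backend names and helper functions below (`implementationSupports`, `backendForDevice`, `selectBackend`) are made up for illustration. The point is to probe which llmodel implementation can handle the model file before committing to the user-selected GPU device, and to fall back to the CPU implementation instead of failing outright.

```cpp
// Illustrative sketch only; names are hypothetical, not the GPT4All API.
#include <iostream>
#include <stdexcept>
#include <string>

struct Backend { std::string name; };

// Toy stand-in for the llmodel implementation check: GPT-J files are
// assumed to have only a CPU implementation, everything else also has GPU.
bool implementationSupports(const Backend &backend, const std::string &modelPath) {
    bool isGptJ = modelPath.find("gpt-j") != std::string::npos;  // toy heuristic
    return backend.name == "CPU" || !isGptJ;
}

Backend backendForDevice(const std::string &device) { return Backend{device}; }

Backend selectBackend(const std::string &modelPath, const std::string &device) {
    // Check whether the requested device's backend supports the file first,
    // but fall back to CPU instead of refusing to load the model.
    if (device != "CPU") {
        Backend gpu = backendForDevice(device);
        if (implementationSupports(gpu, modelPath))
            return gpu;                          // llama.cpp models take this path
    }
    Backend cpu = backendForDevice("CPU");
    if (implementationSupports(cpu, modelPath))
        return cpu;                              // GPT-J models land here
    throw std::runtime_error("no llmodel implementation supports " + modelPath);
}

int main() {
    std::cout << selectBackend("gpt-j-6b.bin", "Vulkan").name << '\n';   // prints "CPU"
    std::cout << selectBackend("llama-7b.gguf", "Vulkan").name << '\n';  // prints "Vulkan"
}
```

The key design choice is simply the order of checks: device selection happens only after the implementation check confirms the backend can load that model architecture.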

Source: https://discord.com/channels/1076964370942267462/1090427154141020190/1250569703928365079

cebtenzzre · Jun 13 '24 15:06