gpt4all
GPT-J models fail to load unless device is set to "CPU"
Because current versions of GPT4All select a GPU backend before checking which llmodel implementations actually support a given model file, loading fails whenever the device is set to anything other than "CPU". I believe this worked before because both the llama.cpp/Kompute and GPT-J/CPU llmodel libraries were called "default", but this changed in #2310.
Source: https://discord.com/channels/1076964370942267462/1090427154141020190/1250569703928365079