
macOS 10.13.6 (High Sierra): dyld: cannot load 'gpt4all-lora-quantized-OSX-intel'

guojing0 opened this issue 2 years ago • 4 comments

I am following the instructions in the README and got the following result after running the command. I have also put gpt4all-lora-quantized.bin in the chat folder.

I am on an Intel-based MacBook Pro (Early 2011) running High Sierra (10.13.6).

Thank you.

➜  chat git:(main) ✗ ./gpt4all-lora-quantized-OSX-intel
dyld: cannot load 'gpt4all-lora-quantized-OSX-intel' (load command 0x80000034 is unknown)
[1]    9129 abort      ./gpt4all-lora-quantized-OSX-intel

Edit: Following @tectiv3's suggestion from here, I got the following result:

➜  chat git:(main) ✗ ./gpt4all-lora-quantized-OSX-intel-old
main: seed = 1680460835
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.35 MB
[1]    10038 illegal hardware instruction  ./gpt4all-lora-quantized-OSX-intel-old

guojing0 · Apr 02 '23 18:04

The reason GPT4ALL is not running in your environment is that your machine's CPU is too old for the prebuilt binaries. Your MacBook Pro (Early 2011) has a Sandy Bridge CPU, which supports neither AVX2 nor FMA. You therefore need to generate an Xcode project with CMake using the LLAMA_NO_AVX2 and LLAMA_NO_FMA options and build GPT4ALL yourself.

Note that running GPT4ALL on even older CPUs of the Nehalem architecture takes one more step: additionally setting LLAMA_NO_AVX was not sufficient, and you also need to remove the -mf16c option from Other C Flags in the Build Settings of the Xcode project that CMake generates (see the sketch below). GPT4ALL is working on my Mac Pro (2010) under macOS 10.13.6 (High Sierra).
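For reference, a minimal sketch of those steps. The repository URL is a placeholder for the llama.cpp fork linked in the README, and the option names are the ones given above; verify them against the fork's CMakeLists.txt (upstream llama.cpp of that era spelled the equivalent toggles LLAMA_AVX2/LLAMA_FMA, default ON).

git clone <fork-url-from-README>          # link is in the "Try it yourself" section
cd <clone-dir> && mkdir build && cd build
# Sandy Bridge: generate an Xcode project with AVX2 and FMA disabled
cmake -G Xcode -DLLAMA_NO_AVX2=ON -DLLAMA_NO_FMA=ON ..
# Nehalem: additionally disable AVX, then open the generated project in
# Xcode and delete -mf16c from Build Settings > Other C Flags
cmake -G Xcode -DLLAMA_NO_AVX=ON -DLLAMA_NO_AVX2=ON -DLLAMA_NO_FMA=ON ..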

largefield-ohara · Apr 10 '23 11:04

How do you do that?

ikekatte · Apr 18 '23 18:04

Yep, I got the same error: 2.7 GHz 12-core Intel Xeon E5, macOS Monterey, AMD FirePro D500 3 GB graphics card. BTW, what does the installer do? It ran and installed, but what do I do after that?

RichardHightower · Apr 19 '23 21:04

In the Try it yourself section of README.md, there is a link to a fork of llama.cpp. Clone that repository and build the appropriate binary for your environment. Sometimes the built binaries do not work, but considering that they do work on High Sierra, in most cases it is the hardware, not the OS, that causes a binary to fail. For problems caused by the CPU, the only solutions I know of are the ones I posted above. Note that the binaries I built myself were compiled to just barely work in an old environment, so they run sluggishly.
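To make that concrete, here is a sketch of a plain command-line build, with the same placeholder URL and option-name caveats as in the earlier sketch; the binary name and output location depend on the fork.

git clone <fork-url-from-README>
cd <clone-dir> && mkdir build && cd build
cmake -DLLAMA_NO_AVX2=ON -DLLAMA_NO_FMA=ON ..   # CPU-feature toggles as above
cmake --build . --config Release
# Run the resulting binary in place of gpt4all-lora-quantized-OSX-intel, e.g.:
./bin/main -m ../gpt4all-lora-quantized.bin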

largefield-ohara · Apr 22 '23 05:04

Stale; please open a new issue if this still occurs.

niansa · Aug 10 '23 15:08