
Publish builds for aarch64 Linux

Open Delamcode opened this issue 2 years ago • 15 comments

Could we get a Linux Arm64 binary? Thank you!

Delamcode avatar Mar 30 '23 17:03 Delamcode

would also appreciate

krixoko avatar Apr 03 '23 11:04 krixoko

I would also like this so I can run GPT4All in termux

Night3890 avatar Apr 08 '23 05:04 Night3890

+1, watching this

knigth95 avatar Apr 08 '23 06:04 knigth95

+1

Euro-pol avatar Apr 10 '23 14:04 Euro-pol

+1 🥰

mael-bomane avatar Apr 29 '23 01:04 mael-bomane

+1

beeemT avatar May 06 '23 09:05 beeemT

+1

Orgoth avatar Jun 04 '23 06:06 Orgoth

+1

ManjTi avatar Jun 04 '23 09:06 ManjTi

Is there any workaround for building and installing on Linux arm64?

canyonrobins avatar Jun 06 '23 22:06 canyonrobins

+1, jetson nano cuda

Xatinni avatar Jun 16 '23 04:06 Xatinni

+1, waiting for this

netandreus avatar Jul 22 '23 20:07 netandreus

+1

slavekrouta avatar Jul 25 '23 14:07 slavekrouta

+1

devloic avatar Jul 25 '23 23:07 devloic

Is there any workaround for building and installing on Linux arm64?

I was able to build the backend and the python package by following this documentation: https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md
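For anyone looking for the concrete steps, here is a minimal sketch of what that README describes (clone with submodules, build the C/C++ backend, then install the Python bindings); the exact paths follow the repository layout at the time of writing and may have changed since:

```shell
# Clone with submodules (the backend vendors llama.cpp as a submodule)
git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
cd gpt4all/gpt4all-backend

# Build the native backend library
mkdir build && cd build
cmake ..
cmake --build . --parallel

# Install the Python bindings against the freshly built backend
cd ../../gpt4all-bindings/python
pip install -e .
```

This works on aarch64 because everything is compiled from source on the target machine, sidestepping the lack of prebuilt binaries.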

baselakasha avatar Aug 27 '23 20:08 baselakasha

Hi, I built this on the Jetson dev kit with Ubuntu 20.04 aarch64 with the following requirements:

  • Qt 6.7.0 (using the aqt installer to install the linux_arm64 version of Qt 6.7.0 with all the dependencies, e.g. aqt install-qt linux_arm64 desktop 6.7.0 -m all); right now only 6.7.0 seems to have everything needed for compiling this.
  • g++ and GCC 10.5.0 (you can either build it yourself or find it in some test repo)
  • cmake >= 3.22.0, built from source

When you compile GPT4All, you need to provide the path to the CUDA toolkit, as the newly compiled cmake 3.22 wasn't able to find it by itself for some reason ~at least for me~ (e.g. cmake . -DCUDAToolkit_ROOT=/usr/local/cuda-11.4).

Go to the build folder for "gpt4all-chat". Note that the Qt Creator that comes with the 20.04 system doesn't work properly (~might be my problem~), so I used the CLI instead:

qmake -project # make sure it's the 6.7.0 version instead of the system one!!!
qmake
cmake . -DCUDAToolkit_ROOT=/usr/local/cuda-11.4 # might be redundant for you
make -j4

and everything should compile and run normally; the CUDA acceleration works very well!

Note: on the 8 GB RAM devkit, if you want to use 7B models, please build and use the Python backend only; the Qt chat UI only supports models of roughly 4 GB without crashing due to low memory (even with a headless startup).
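As a rough sketch of what "Python backend only" looks like in practice (the model filename here is illustrative, and whether device="gpu" picks up the Jetson's CUDA build depends on how the bindings were compiled):

```python
# Hypothetical headless usage of the Python bindings on the devkit,
# avoiding the Qt chat UI's memory overhead.
from gpt4all import GPT4All

# Any locally available GGUF model file works; this name is just an example.
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", device="gpu")

with model.chat_session():
    print(model.generate("Hello from aarch64!", max_tokens=64))
```

Running inference this way keeps the only large allocation the model weights themselves, which is why a 7B quantized model fits where the GUI does not.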

Note 2: it can also be built without CUDA, so I think this should also work on the Raspberry Pi.

Note 3: please compile from the main branch, or at least v2.8.0 and above; as tested, v2.7.5 and below will not compile with this method.

benhaotang avatar Jun 15 '24 21:06 benhaotang