gpt4all
Achieve Arch Linux compatibility.
I was able to install it:
Download the installer
chmod +x gpt4all-installer-linux.run
./gpt4all-installer-linux.run
cd <gpt4all-dir>/bin
./chat
But I am unable to select a download folder so far.
qrc:/gpt4all/qml/ModelDownloaderDialog.qml:329:13: QML FolderDialog: Failed to load non-native FolderDialog implementation: qrc:/qt-project.org/imports/QtQuick/Dialogs/quickimpl/qml/FolderDialog.qml:4 module "Qt.labs.folderlistmodel" is not installed
I installed Arch's qt5-declarative, which provides the equivalents of Debian's qtdeclarative5-models-plugin, qml-module-qt-labs-settings, and qml-module-qt-labs-folderlistmodel.
But gpt4all is still complaining.
I will update here once the models have downloaded, so I can verify that the folder issue is the only one and that it hopefully works.
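One way to check whether the Qt.labs.folderlistmodel QML files are actually present on Arch (the paths below are assumptions based on the usual Qt 5 and Qt 6 QML install locations):

# which package, if any, owns the Qt 5 module directory
pacman -Qo /usr/lib/qt/qml/Qt/labs/folderlistmodel
# same check for the Qt 6 QML tree, in case the chat UI is built against Qt 6
pacman -Qo /usr/lib/qt6/qml/Qt/labs/folderlistmodel

If neither path exists, the module really is missing and the FolderDialog error above is expected.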
I have a similar issue. I installed it from the AUR; a week ago it worked perfectly, but I updated it three days ago and now when I run it, it always presents the following error:
QQmlApplicationEngine failed to load component
qrc:/gpt4all/main.qml: module "org.kde.desktop" is not installed
QCoreApplication::applicationDirPath: Please instantiate the QApplication object first
I just installed it fresh from the AUR and had no issues, but now I'm worried...
Which one?
I tried #1... the git URL checks out, but then the binary on the CLI wants models first?
# Maintainer: mmxgn <[email protected]>

pkgname=gpt4all-git
_commit=1eeaa5c8eee2a1ced12c0756ee05a6139d8d5eb3
pkgver=r153.1eeaa5c
pkgrel=1
epoch=
pkgdesc="Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo"
arch=('x86_64')
url="https://github.com/nomic-ai/gpt4all"
license=('Unlicensed')
depends=('bash')
optdepends=('aria2: for downloading models using magnet links')
makedepends=('git')
provides=("$pkgname")
conflicts=("$pkgname")
source=("gpt4all-repo::git+https://github.com/nomic-ai/gpt4all.git#commit=${_commit}"
        "gpt4all")
sha256sums=('SKIP'
            'SKIP')

pkgver() {
  cd "$srcdir/gpt4all-repo"
  printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
}

prepare() {
  cd "$srcdir/gpt4all-repo"
}

build() {
  # No build step is necessary for this package
  return 0
}

package() {
  cd "$srcdir/gpt4all-repo"

  # Install the binary
  install -Dm755 "$srcdir/gpt4all-repo/chat/gpt4all-lora-quantized-linux-x86" "$pkgdir/usr/lib/gpt4all/gpt4all-lora-quantized-linux-x86"

  # Install the wrapper script
  install -Dm755 "$srcdir/gpt4all" "$pkgdir/usr/bin/gpt4all"
}
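For reference, a PKGBUILD like the one above is normally built and installed with the standard makepkg workflow (this assumes the PKGBUILD and the gpt4all wrapper script it lists as a source are both in the current directory):

mkdir gpt4all-git && cd gpt4all-git
# copy the PKGBUILD and the gpt4all wrapper script here first
makepkg -si    # build the package and install it via pacman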
#609 for Ubuntu does not seem to work either BUT:
I downloaded the models with either the Mac app or from the website, put them into the bin folder, and it works great now.
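A minimal sketch of that workaround for the installer layout mentioned earlier in the thread (the model filename is just an example; use whatever model file you actually downloaded):

cd <gpt4all-dir>/bin
cp ~/Downloads/ggml-gpt4all-j-v1.3-groovy.bin .    # put the downloaded model next to the chat binary
./chat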
I tried the first one and it works perfectly, but the second one, which was the one I used before, stopped working... Today I updated the AUR package for the second one again, but it still presents the same problem, and from what I see in the console it is a GUI problem, because the model does load:
deserializing chats took: 0 ms
QQmlApplicationEngine failed to load component
qrc:/gpt4all/main.qml: module "org.kde.desktop" is not installed
serializing chats took: 0 ms
QCoreApplication::applicationDirPath: Please instantiate the QApplication object first
llama.cpp: loading model from /home/myuser/.local/share/nomic.ai/GPT4All//ggml-stable-vicuna-13B.q4_2.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32001
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 5 (mostly Q4_2)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73,73 KB
llama_model_load_internal: mem required = 9807,47 MB (+ 1608,00 MB per state)
llama_init_from_file: kv self size = 1600,00 MB
I just found the source of the issue. These days I was trying to solve some problems I had with GTK themes in Qt applications, and I had this line in .profile; I removed it and the problem was solved:
export QT_QPA_PLATFORMTHEME=gnome
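A quick way to test this without editing ~/.profile is to unset the variable for a single run (gpt4all-chat is the binary name used by the AUR package):

env -u QT_QPA_PLATFORMTHEME gpt4all-chat    # launch once with the platform theme variable removed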
@JoZ3 In which profile file exactly did you remove this line? I could not find any profile file in the nomic application.
@dinnerisserved That line is from my system configuration, in the ~/.profile file. I use gnome-shell and wanted Qt applications to be well integrated with the system.
Hi there!
I downloaded gpt4all-chat-git
from the AUR and it all seems to work quite well: the UI loads and I've downloaded a bunch of models to mess around with.
I do have an issue though: it's stuck on "Loading model..." (either it's stuck or I have a massively slow laptop...). Any idea how long this loading process usually takes?
I'm having the same issue on a Ryzen 9 5900X.
The only log:
kyngs@dada ~ [SIGINT]> gpt4all-chat
[Debug] (Sun Jun 4 12:19:37 2023): deserializing chats took: 0 ms
Very interesting, I'm on a Ryzen 5 4600H but my logs look like:
[Debug] (Sun Jun 4 11:35:39 2023): deserializing chats took: 0 ms
[Debug] (Sun Jun 4 11:36:41 2023): ERROR: Couldn't parse: "" "illegal value"
[Debug] (Sun Jun 4 11:36:41 2023): ERROR: Couldn't parse: "" "illegal value"
[Debug] (Sun Jun 4 11:36:41 2023): ERROR: Couldn't parse: "" "illegal value"
[Debug] (Sun Jun 4 11:36:41 2023): ERROR: Couldn't parse: "" "illegal value"
It strikes me as odd that I downloaded 4 models and the error is printed 4 times. I'm guessing it can't find them?
EDIT:
It also says Couldn't find tool at "/usr/bin/../maintenancetool" so cannot check for updates!
although I understand that's a different issue.
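One quick sanity check for the "it can't find them" guess (the directory below is the model path that shows up in the llama.cpp log earlier in this thread):

ls -lh ~/.local/share/nomic.ai/GPT4All/    # the downloaded .bin model files should be listed here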
I have the same "loading model" issue. There is a comment on the AUR page that might be of interest:
Tried a bit around: make install builds 3 libs: "libllmodel.so", "libllmodel.so.0", and "libllmodel.so.0.2.0". But in projdir/build/bin/ there are more libs; when you add the lib "libgptj-default.so" to the 3 libs above and have the binary "chat" in the same folder, it works. Unfortunately, it doesn't work when you copy all the libs into /lib/.
https://aur.archlinux.org/packages/gpt4all-chat-git
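A rough sketch of the workaround described in that comment; the build paths follow its projdir/build/bin layout and the run directory is arbitrary, so treat all of this as an assumption rather than the packaged fix:

cd <projdir>/build
mkdir -p ~/gpt4all-run
cp bin/chat bin/libgptj-default.so ~/gpt4all-run/                      # the chat binary plus the extra backend lib
cp libllmodel.so libllmodel.so.0 libllmodel.so.0.2.0 ~/gpt4all-run/    # the three libs from make install (location may differ)
cd ~/gpt4all-run && ./chat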
It appears that this is fixed, at least on my system.
Yeah, it works well on mine.
I can also confirm it's been fixed.
For me it does not work via the AUR git package:
[Warning] (Thu Aug 31 23:01:18 2023): WARNING: Could not download models.json synchronously
[Debug] (Thu Aug 31 23:01:19 2023): deserializing chats took: 1 ms
[Warning] (Thu Aug 31 23:01:19 2023): QQmlApplicationEngine failed to load component
[Warning] (Thu Aug 31 23:01:19 2023): qrc:/gpt4all/main.qml: module "gtk2" is not installed
[Warning] (Thu Aug 31 23:01:19 2023): "ERROR: Modellist download failed with error code \"5-Operation canceled\""
[Warning] (Thu Aug 31 23:01:19 2023): QIODevice::read (QNetworkReplyHttpImpl): device not open
[Warning] (Thu Aug 31 23:01:19 2023): ERROR: Couldn't parse: "" "illegal value"
[Warning] (Thu Aug 31 23:01:19 2023): QIODevice::read (QNetworkReplyHttpImpl): device not open
[Warning] (Thu Aug 31 23:01:19 2023): ERROR: Couldn't parse: "" "illegal value"
I can access gpt4all via the terminal but not the GUI chat... any help?
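Given the platform-theme fix earlier in this thread, one thing worth trying before digging further (QT_QPA_PLATFORMTHEME and QT_STYLE_OVERRIDE are standard Qt environment variables; whether either one is actually set on your system is an assumption):

env -u QT_QPA_PLATFORMTHEME -u QT_STYLE_OVERRIDE gpt4all-chat    # launch once with both theme/style overrides removed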