
run.sh: line 43: 79546 Illegal instruction (core dumped) python app.py

Open · ItzChr1s1 opened this issue 1 year ago · 15 comments

Expected Behavior

It seems like I have installed everything successfully, but when I start run.sh it is unable to start.

Current Behavior

"run.sh: line 43: 79546 Illegal instruction (core dumped) python app.py"

Steps to Reproduce


  1. I started by following the git clone and install instructions at https://github.com/nomic-ai/gpt4all-ui

  2. I received an error, found an existing issue for it (https://github.com/nomic-ai/gpt4all-ui/issues/20), and ran "sudo apt install python3.11-dev", which worked.

  3. I then followed the instructions at https://github.com/nomic-ai/gpt4all-ui/blob/main/docs/Linux_Osx_Install.md, which installed successfully and told me to run run.sh

  4. Now, when I use bash run.sh, I receive the line 43: 79546 error above.

Screenshots

[Screenshot: 2023-04-08 at 5:18 PM]

ItzChr1s1 avatar Apr 08 '23 21:04 ItzChr1s1

Saw this same issue trying to help a friend get set up. They had a 2020 M1.

sadalsvvd avatar Apr 09 '23 00:04 sadalsvvd

> Saw this same issue trying to help a friend get set up. They had a 2020 M1.

Hm, I'm running Proxmox and using an Ubuntu VM to set it up, but it's on a 2017 iMac too.

ItzChr1s1 avatar Apr 09 '23 00:04 ItzChr1s1

I get run.sh: line 43: 7657 Segmentation fault (core dumped) python app.py on Ubuntu 22.04, also after running sudo apt install python3.11-dev during the installation.

yzimmermann avatar Apr 09 '23 20:04 yzimmermann

Same for me; there seems to be a library problem with Python 3.11:

Checking discussions database... Ok
llama_model_load: loading model from './models/gpt4all-lora-quantized.bin' - please wait ...
llama_model_load: invalid model file './models/gpt4all-lora-quantized.bin' (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!)
llama_init_from_file: failed to load model
llama_generate: seed = 1681074177

system_info: n_threads = 8 / 4 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
./run.sh: line 43: 6652 Segmentation fault (core dumped) python app.py
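A note on the system_info line above: an "Illegal instruction" crash usually means the binary was compiled with a SIMD extension (e.g. AVX2) that the VM's virtual CPU does not advertise. A minimal sketch of how one could cross-check the flags in /proc/cpuinfo against what llama.cpp reports as compiled in (the flag list here is illustrative):

```python
def missing_flags(cpuinfo_text, wanted=("avx", "avx2", "fma", "sse3")):
    """Return the flags from `wanted` that a /proc/cpuinfo dump does not list."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        # Each logical CPU has a "flags" line listing its supported extensions.
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return [f for f in wanted if f not in flags]

# On a real Linux box: missing_flags(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu sse3 avx fma\n"
print(missing_flags(sample))  # this sample CPU does not advertise avx2
```

If the list is non-empty for a flag the binary was built with, the VM's CPU model (not the Python version) is the likely culprit.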

crypto-maniac avatar Apr 09 '23 21:04 crypto-maniac

I created a new Ubuntu VM, went through the process, and ended up with the exact same error message, rip.

I attempted the Docker setup instead. It built successfully, but when trying to run it just stops with this message (I do have gpt4all-lora-quantized-ggml.bin inside the models folder): gpt4all-ui_webui_1 exited with code 132

[Screenshot: 2023-04-09 at 5:24 PM]

ItzChr1s1 avatar Apr 09 '23 21:04 ItzChr1s1

I'm getting the segmentation fault error @yzimmermann mentioned.

My logs show the following, in case the additional detail is helpful:

Apr 9 20:34:03 localhost kernel: python[15729]: segfault at 14d0 ip 00007f75cf6824f3 sp 00007f75ce69bd10 error 4 in _pyllamacpp.cpython-311-x86_64-linux-gnu.so[7f75cf676000+62000]
Apr 9 20:34:03 localhost kernel: Code: 54 f1 10 48 8d 7c 24 28 4c 89 f9 48 8b 70 08 e8 03 3e ff ff 48 8b 7c 24 28 4c 8b 2b ba 07 69 0f c7 48 8b 77 e8 e8 1d 4d ff ff <4d> 8b 45 08 31 d2 49 89 c1 49 f7 f0 49 8b 45 00 48 8b 04 d0 49 89
Apr 9 20:34:03 localhost systemd[1]: Started Process Core Dump (PID 15730/UID 0).
Apr 9 20:34:04 localhost systemd-coredump[15731]: Resource limits disable core dumping for process 15727 (python).
Apr 9 20:34:04 localhost systemd-coredump[15731]: Process 15727 (python) of user 0 dumped core.
Apr 9 20:34:04 localhost systemd[1]: [email protected]: Succeeded.
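The "Resource limits disable core dumping" line means RLIMIT_CORE was 0 when the process crashed, so no usable core file was written. A sketch of raising the limit from Python before launching the app (on Linux; the equivalent of running `ulimit -c unlimited` in the shell first, bounded by the hard limit):

```python
import resource

# Inspect the current core-dump size limit for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print(f"core-dump limit before: soft={soft}, hard={hard}")

# Raise the soft limit as far as the hard limit allows, so a subsequent
# segfault in this process (or its children) actually produces a core file.
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
print("core-dump limit after:", resource.getrlimit(resource.RLIMIT_CORE))
```

With a core file captured, gdb can show which instruction in _pyllamacpp the crash landed on.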

Edit: This is a fresh clone on a server that was running a previous version without issue.

tljoy avatar Apr 10 '23 02:04 tljoy

I got run.sh: line 43: 21412 Illegal instruction python app.py after running bash run.sh.

I also installed Python 3.11.1 on Debian 11.

I'm also trying to run this on a Proxmox VM with Debian 11.

EDIT:

Debian 11 has Python 3.9 by default, so I first installed 3.11.3 based on this tutorial: https://techviewleo.com/how-to-install-python-3-on-debian/

I then switched to the 3.11 version.

I tried editing run.sh line 43, changing the launch command to use python3.

I'm still getting the same error.

andzejsp avatar Apr 10 '23 12:04 andzejsp

> I created a new Ubuntu VM, went through the process, and ended up with the exact same error message, rip.
>
> I attempted the Docker setup instead. It built successfully, but when trying to run it just stops with this message (I do have gpt4all-lora-quantized-ggml.bin inside the models folder): gpt4all-ui_webui_1 exited with code 132
>
> [Screenshot: 2023-04-09 at 5:24 PM]

Similar problem here. The container starts, but after I ask the bot something, the container exits with error code 139.

fvillena avatar Apr 10 '23 15:04 fvillena

Well, it's not exactly similar to what I'm having. I have no idea how to make it run on Debian; I'm still getting an illegal instruction when trying to launch python app.py.

andzejsp avatar Apr 11 '23 08:04 andzejsp

I can confirm that this error persists on Ubuntu 22.04.2 as well:

[Screenshots: terminal output]

Sorry for posting images; I'm running this on Proxmox and haven't set up SSH just yet.

Since my OS had Python 3.10, I used bash install.3.10.sh to install this.

Also, after I run the run script, the OS throws this error:

[Screenshot: OS error dialog]

andzejsp avatar Apr 12 '23 13:04 andzejsp

Debugging this further, I started commenting out imports in app.py. When I comment out the pyllamacpp import (line 29 of app.py), I get a different error.

Yes, I know commenting out code will cause errors, but every other import I tried gave me the same illegal instruction. [Screenshot: traceback]

So my hunch is that that import has something to do with it not being able to run on Ubuntu/Debian in a VM.
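The bisection andzejsp did by hand can also be automated: import each candidate module in a child interpreter, so a SIGILL kills the child rather than the session doing the testing. This is a sketch; the module list is illustrative and should be swapped for app.py's actual imports:

```python
import subprocess
import sys

def import_ok(module):
    """True if `python -c "import <module>"` exits cleanly in a child process.

    A module whose native extension crashes with SIGILL will give a
    non-zero (negative, on POSIX) returncode instead of killing us.
    """
    proc = subprocess.run(
        [sys.executable, "-c", f"import {module}"],
        capture_output=True,
    )
    return proc.returncode == 0

for mod in ["json", "flask", "pyllamacpp"]:
    status = "ok" if import_ok(mod) else "FAILED (crash or missing)"
    print(f"{mod}: {status}")
```

Whichever module reports FAILED while the rest pass is the one worth reporting upstream.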

andzejsp avatar Apr 12 '23 14:04 andzejsp

It seems like a pyllamacpp bug. Please report it on their repo.

ParisNeo avatar Apr 12 '23 15:04 ParisNeo

Actually, it was the CPU's fault: the VM didn't expose, I guess, the right instruction set, so the instruction was illegal. I switched the CPU type to host (in the Proxmox GUI) and now I get to the web page. But I got an error that the model is bad.

The Windows installation asks to convert the model after it downloads it, but the Linux install.3.10.sh script does not.
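The llama_model_load error earlier in the thread names the fix itself: convert the model with convert-unversioned-ggml-to-ggml.py. A minimal sketch of wiring that missing step into the Linux flow, assuming the script sits in the working directory and takes the model path plus a tokenizer.model argument (both assumptions; check the script's actual CLI before relying on this):

```python
import subprocess
import sys
from pathlib import Path

def convert_model(model_path):
    """Run the conversion step the Linux installer skips.

    Returns False if the model file is missing; otherwise launches the
    (hypothetically-invoked) conversion script and returns True.
    """
    model = Path(model_path)
    if not model.exists():
        print(f"model not found: {model}")
        return False
    # Script name comes from the llama_model_load error message; the
    # argument order here is an assumption, not the confirmed CLI.
    subprocess.run(
        [sys.executable, "convert-unversioned-ggml-to-ggml.py",
         str(model), "tokenizer.model"],
        check=True,
    )
    return True

convert_model("models/gpt4all-lora-quantized.bin")
```

This is roughly the step ParisNeo describes adding to the install scripts below.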

andzejsp avatar Apr 13 '23 05:04 andzejsp

I was checking the scripts written by the community to install the models, and for macOS and Linux they miss the conversion step. So I'm actually upgrading the scripts to make them do the conversion. Unfortunately I'm doing this blindly, and these days all focus is on the backend, so I am literally working alone on this project right now. I need feedback from the community. When I finish coding the script I'll tell you; you test and report any problems, and I think we'll fix it together if you want.

ParisNeo avatar Apr 13 '23 06:04 ParisNeo

> I was checking the scripts written by the community to install the models, and for macOS and Linux they miss the conversion step. So I'm actually upgrading the scripts to make them do the conversion. Unfortunately I'm doing this blindly, and these days all focus is on the backend, so I am literally working alone on this project right now. I need feedback from the community. When I finish coding the script I'll tell you; you test and report any problems, and I think we'll fix it together if you want.

No worries, man. It looks like I'm not alone on the issue, but regardless, you're doing great. One step closer! 👍

ItzChr1s1 avatar Apr 13 '23 06:04 ItzChr1s1