lollms-webui
does not answer
When I run run.bat, the server starts, and when I enter a question the terminal shows:
system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
but the answer never shows up. I downloaded the normal GPT4All and it works fast with no issues; the problem is only with GPT4ALL-UI.
Similar issue here on macOS.
[2023-04-09 23:33:05,899] {_internal.py:224} INFO - 127.0.0.1 - - [09/Apr/2023 23:33:05] "POST /bot HTTP/1.1" 200 -
llama_generate: seed = 1681097585
system_info: n_threads = 8 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
run.sh: line 43: 47326 Segmentation fault: 11 python app.py
james@james gpt4all-ui %
Line 43 is this one:
self.add_endpoint(
"/new_discussion", "new_discussion", self.new_discussion, methods=["GET"]
)
I'm using the defaults and the supplied .sh script. I figure it's my Python environment, but the install document says it uses a venv, so IDK 🤔
Hi everyone, and thanks for testing. venv is a little different from conda: with conda you can set up a whole separate Python environment, but venv is a layer on top of Python, so it needs a Python interpreter, which is copied into the env dir. So yes, you do depend on the installed Python.
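A quick way to see that dependency (minimal sketch; `env` is just an example directory name):

```shell
# venv copies or symlinks the *currently running* interpreter into the env,
# so the env inherits whatever Python is installed on the system.
python3 -m venv env
./env/bin/python -V      # reports the same version as the system python3
cat env/pyvenv.cfg       # "home" records where the base interpreter lives
```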
I'm seeing the same thing using docker compose on Ubuntu 20.04.6 LTS
gpt4all-ui-webui-1 | [2023-04-10 15:59:23,311] {_internal.py:224} INFO - 174.63.27.201 - - [10/Apr/2023 15:59:23] "POST /bot HTTP/1.1" 200 -
gpt4all-ui-webui-1 | llama_generate: seed = 1681142363
gpt4all-ui-webui-1 |
gpt4all-ui-webui-1 | system_info: n_threads = 8 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
gpt4all-ui-webui-1 exited with code 139
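For what it's worth, exit code 139 is Docker reporting 128 + signal 11, i.e. the python process died with SIGSEGV, the same segfault seen in the macOS report above. A quick sanity check of that arithmetic:

```shell
# Container exit codes above 128 encode "128 + fatal signal number".
status=139
sig=$((status - 128))
echo "$sig"      # 11, i.e. SIGSEGV
kill -l "$sig"   # prints the signal name in most shells
```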
It shouldn't matter which versions of Python I have installed on the server, since this is containerized, right?
Yes, it shouldn't. I'll have to ask the community to test and try to figure out a solution, as I am on vacation and have no access to any Linux box. I'll check it out when I come back. But if someone finds a solution, please modify install.sh accordingly and open a pull request; I'll accept it if it's OK.
Is there a conda env on Linux? Has anyone had luck running this on something other than Ubuntu? Like Debian 11, mayhaps? :3
I have it running on Debian 11 with Python 3.11.2, spun up as of yesterday. All working fine. I had to add cmake and then sentencepiece, otherwise the install failed; then I converted ggml to ggjt to get it to work properly.
echo "Installing requirements..."
export DS_BUILD_OPS=0
export DS_BUILD_AIO=0
python3.11 -m pip install pip --upgrade
python3.11 -m pip install cmake
python3.11 -m pip install sentencepiece
python3.11 -m pip install -r requirements.txt
(env) root@AI:/home/gpt4all-ui# python3 -V
Python 3.11.2
(env) root@AI:/home/gpt4all-ui# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
I already had cmake, but didn't have that sentencepiece.
Still getting an illegal instruction:
run.sh: line 43: 1652 Illegal instruction python app.py
sd@debian-sd:~/gpt4all-ui$
sd@debian-sd:~/gpt4all-ui$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Hey @andzejsp, it looks like you aren't running venv?
I did a fresh install of the repo today and still had to migrate ggml to ggjt but other than that it works fine on Debian 11.
wget https://raw.githubusercontent.com/ggerganov/llama.cpp/master/migrate-ggml-2023-03-30-pr613.py
python3 migrate-ggml-2023-03-30-pr613.py models/gpt4all-lora-quantized-ggml.bin models/gpt4all-lora-quantized-ggjt.bin
source env/bin/activate
python3 app.py --model gpt4all-lora-quantized-ggjt.bin
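To tell whether a given model file still needs this migration, you can inspect its first four bytes. The magic values below are the ones llama.cpp used at the time (stored little-endian, so they appear reversed on disk); treat the exact values as an assumption, and the path is only an example:

```shell
MODEL=models/gpt4all-lora-quantized-ggml.bin   # example path
case "$(head -c 4 "$MODEL")" in
  tjgg) echo "ggjt - already in the new format, no migration needed" ;;
  lmgg) echo "old ggml - run migrate-ggml-2023-03-30-pr613.py first" ;;
  fmgg) echo "ggmf - also needs migration" ;;
  *)    echo "unrecognised magic" ;;
esac
```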
I messed up my Python installation trying to install from source (no apt), and in the end I nuked the Debian VM and went with Ubuntu. The same problem arose on Ubuntu, but it turned out that AVX was not enabled on the CPU. All I had to do was set the CPU type to host, and now it works.
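Setting the VM's CPU model to host passes the physical CPU's feature flags through to the guest. A quick check, from inside the guest, that AVX is actually visible:

```shell
# llama.cpp builds compiled with AVX crash with "Illegal instruction"
# on CPUs (or VMs) whose flags don't advertise it.
if grep -qw avx /proc/cpuinfo; then
  echo "AVX available"
else
  echo "no AVX - enable host CPU passthrough or rebuild without AVX"
fi
```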
Although I can't seem to convert any of the models, except the default model that the UI downloads.
gpt4all-unfiltered - does not work
ggml-vicuna-7b-4bit - does not work
vicuna-13b-GPTQ-4bit-128g - does not work
LLaMa-Storytelling-4Bit - does not work
Later I'll look into it and maybe open issues about the specific errors it gives.
The vicuna models are normally already in the right format, so you don't need to convert them.