private-gpt
17958 illegal hardware instruction python privateGPT.py
Hello,
I followed the steps in the README, but when I run python privateGPT.py
I get the following error:
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 1000
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 4113748.20 KB
llama_model_load_internal: mem required = 5809.33 MB (+ 2052.00 MB per state)
...................................................................................................
.
llama_init_from_file: kv self size = 1000.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
Using embedded DuckDB with persistence: data will be stored in: db
[1] 17958 illegal hardware instruction python privateGPT.py
Looks like your CPU may not be supported; I ran into this issue using a VirtualBox VM.
Using Windows in Hyper-V, I had AVX = 1 and AVX2 = 1, and that seemed to work, which leads me to believe these may be necessary.
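For what it's worth, on Linux you can check for those flags before running. A rough sketch (my own helper, Linux only, since macOS has no /proc/cpuinfo):

```python
# Quick check (Linux only): does this CPU/VM expose the AVX/AVX2 flags
# that a default llama.cpp x86 build expects?  The parser is fed the text
# of /proc/cpuinfo; the helper name is my own, not part of privateGPT.
def cpu_flags(cpuinfo_text):
    """Return the set of CPU feature flags from /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# On a real Linux box you would pass open("/proc/cpuinfo").read() instead.
sample = "flags\t\t: fpu vme sse3 avx avx2"
flags = cpu_flags(sample)
print("avx" in flags, "avx2" in flags)  # True True
```

If either flag is missing inside the VM, enabling nested CPU feature passthrough in the hypervisor settings may help.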
Getting the same error on Apple M1
zsh: illegal hardware instruction python privateGPT.py
same error here using M1 Max
49616 illegal hardware instruction python privateGPT.py
Same error on a Mac M1 as well
Getting same error in Macbook Air M2
I was able to get it working on VMware Workstation running Fedora; it definitely needs AVX and AVX2 by the looks of it. I'm not on Apple silicon, though; maybe Apple silicon isn't a supported architecture yet.
Got the same error on M1 Mac upon running:
python privateGPT.py
Using embedded DuckDB with persistence: data will be stored in: db
zsh: illegal hardware instruction python privateGPT.py
Running Python 3.10.11, macOS 12.1
Same on M1, Python 3.10.11, macOS 12.6. The error occurs only while running privateGPT.py; the ingest.py script runs without any error. No difference with or without Rosetta.
I got it to work by making sure the Python installation was for arm64. To check:

> python3
>>> import platform
>>> platform.uname()

Make sure this returns a result containing machine='arm64'. If this is not the case and you are using miniconda, make sure to install miniconda for M1 and, when creating the environment, prepend CONDA_SUBDIR=osx-arm64, like so:

CONDA_SUBDIR=osx-arm64 conda create -n privategpt python=3.10

After installing the requirements I got privateGPT.py running and asking for a question. Now it fails because of a missing index, but I assume this is just because I have not run ingest.py yet.
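The check above can be wrapped in a small guard so a Rosetta interpreter is caught before the crash. A minimal sketch (my own helper, not part of privateGPT):

```python
# Guard: warn when the interpreter is an x86_64 build on macOS, which on
# an M1/M2 Mac usually means it is running under Rosetta and will hit
# "illegal hardware instruction" in native extensions like llama.cpp.
import platform
import sys

def interpreter_is_native_arm(plat=None, machine=None):
    """True unless this Python build is x86_64 running on macOS."""
    plat = plat or sys.platform
    machine = machine or platform.machine()
    return not (plat == "darwin" and machine == "x86_64")

if not interpreter_is_native_arm():
    print("x86_64 Python on macOS: reinstall an arm64 build, e.g.\n"
          "  CONDA_SUBDIR=osx-arm64 conda create -n privategpt python=3.10")
```

The parameters exist only so the check can be exercised off-Mac; in practice calling interpreter_is_native_arm() with no arguments inspects the running interpreter.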
Thanks Alexander, this fixed the issue I was having, and I now have it running on my M1.
@briancunningham6 I'm using anaconda3:

>>> import platform
>>> platform.uname()
uname_result(system='Darwin', node='Brians-MacBook-Pro.local', release='22.4.0', version='Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000', machine='arm64')
Any ideas?
I tried the same, and my machine reports machine='x86_64'. Any suggestions for this?