private-gpt
After yesterday's update I get an error and can't run it.
llama.cpp: loading model from ./models/ggml-model-q4_0.bin
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 4113748.20 KB
llama_model_load_internal: mem required = 5809.33 MB (+ 2052.00 MB per state)
...................................................................................................
.
llama_init_from_file: kv self size = 512.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
Using embedded DuckDB with persistence: data will be stored in: db
Traceback (most recent call last):
File "/home/rex/privateGPT/privateGPT.py", line 39, in pip install pygpt4all
. (type=value_error)
[2023-05-10 15:44:11,424] {duckdb.py:414} INFO - Persisting DB to disk, putting it in the save folder: db
I already tried installing pygpt4all, but I get the same error.
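One thing worth ruling out is that pygpt4all got installed into a different environment than the one actually running privateGPT.py. Below is a minimal sanity-check sketch (the script name and helper are hypothetical, not part of privateGPT); run it with the exact interpreter you use for privateGPT.py:

```python
# check_env.py -- hypothetical sanity check, not part of privateGPT.
# Run with the same interpreter that runs privateGPT.py to confirm
# pygpt4all is importable in that environment.
import importlib.util
import sys

def has_package(name: str) -> bool:
    """Return True if `name` can be imported by this interpreter."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    print("interpreter:", sys.executable)
    for pkg in ("pygpt4all", "langchain"):
        print(f"{pkg}:", "importable" if has_package(pkg) else "NOT importable")
```

If the interpreter path printed here is not the one you ran `pip install pygpt4all` with, that mismatch would explain why the error persists.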
Getting the same error. It was mentioned here as well: https://github.com/imartinez/privateGPT/issues/11#issuecomment-1540013300
I too am getting the same error. Has anybody found a fix for this?
Getting the same error!!
It's a langchain issue; see https://github.com/hwchase17/langchain/issues/3839#issuecomment-1544909631
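Since the linked langchain issue suggests the breakage is tied to specific releases, it may help to check which versions are actually installed before changing anything. This is just a quick inspection sketch (the package list is an assumption, not a prescribed fix):

```python
# print_versions.py -- quick inspection of installed package versions.
# Assumption: the workaround in the linked issue depends on which
# langchain release you have, so knowing the installed versions helps.
import importlib.metadata as md

for pkg in ("langchain", "pygpt4all", "llama-cpp-python"):
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")
```

Posting that output alongside the traceback should make it easier to tell whether you are hitting the same langchain regression described in that issue.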