
Error when running "python privateGPT.py": cannot pickle '_pygptj.gptj_model' object (type=type_error)

Open ZhaotingLi opened this issue 2 years ago • 6 comments

Hi,

I am trying to run the command "python privateGPT.py" to use the privateGPT tool, but I am encountering an error that prevents me from doing so. The error message I receive is as follows:

llama.cpp: loading model from ./models/ggml-model-q4_0.bin
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format     = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 4113748.20 KB
llama_model_load_internal: mem required  = 5809.33 MB (+ 2052.00 MB per state)
...................................................................................................
.
llama_init_from_file: kv self size  =  512.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 
Using embedded DuckDB with persistence: data will be stored in: db
gptj_model_load: loading model from './models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx   = 2048
gptj_model_load: n_embd  = 4096
gptj_model_load: n_head  = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot   = 64
gptj_model_load: f16     = 2
gptj_model_load: ggml ctx size = 4505.45 MB
gptj_model_load: memory_size =   896.00 MB, n_mem = 57344
gptj_model_load: ................................... done
gptj_model_load: model size =  3609.38 MB / num tensors = 285
Traceback (most recent call last):
  File "privateGPT.py", line 39, in <module>
    main()
  File "privateGPT.py", line 16, in main
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)
  File "/home/lzt/.local/lib/python3.8/site-packages/langchain/chains/retrieval_qa/base.py", line 91, in from_chain_type
    combine_documents_chain = load_qa_chain(
  File "/home/lzt/.local/lib/python3.8/site-packages/langchain/chains/question_answering/__init__.py", line 218, in load_qa_chain
    return loader_mapping[chain_type](
  File "/home/lzt/.local/lib/python3.8/site-packages/langchain/chains/question_answering/__init__.py", line 63, in _load_stuff_chain
    llm_chain = LLMChain(
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  cannot pickle '_pygptj.gptj_model' object (type=type_error)

I am not sure what is causing this error or how to fix it. I have tried searching for solutions online, but have not been able to find anything that works for me.

Here are some additional details that might be helpful:

  • I am running the command on an Ubuntu 20.04 machine.
  • I have installed all the necessary dependencies as instructed in the documentation.
  • I am using the "ggml-gpt4all-j-v1.3-groovy.bin" model.
  • I have successfully run the ingest command.

If anyone has any ideas on how to fix this error, I would greatly appreciate your help. Thank you in advance!

ZhaotingLi avatar May 11 '23 14:05 ZhaotingLi
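
Editor's note: the traceback ends inside pydantic's model construction, which suggests the LLM object is being copied during field validation. `copy.deepcopy` of an arbitrary object falls back on the pickle protocol, so a C-backed handle that refuses to pickle also breaks the copy. A minimal sketch of that failure mode, using a hypothetical `ModelHandle` class rather than the real `_pygptj.gptj_model` type:

```python
import copy

class ModelHandle:
    """Hypothetical stand-in for a C-backed model object such as
    _pygptj.gptj_model, which cannot be pickled or deep-copied."""

    def __reduce_ex__(self, protocol):
        # copy.deepcopy of an unknown object goes through the pickle
        # protocol, so refusing to pickle also makes deepcopy fail.
        raise TypeError("cannot pickle 'ModelHandle' object")

try:
    # Roughly what a validation-time copy of the field value does.
    copy.deepcopy(ModelHandle())
except TypeError as err:
    print(err)  # -> cannot pickle 'ModelHandle' object
```

Pinning pydantic to a release that does not perform this copy (see the comments below) sidesteps the failure without touching the model object itself.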

Did you run python .\ingest.py .\source_documents\state_of_the_union.txt first?

R-Y-M-R avatar May 11 '23 14:05 R-Y-M-R

Yes, the ingest part works normally and I can find the "db/index" directory after executing python ingest.py source_documents/state_of_the_union.txt.

ZhaotingLi avatar May 11 '23 15:05 ZhaotingLi

I want to solve this problem! Is that okay with you?

abdullahnoori257 avatar May 13 '23 09:05 abdullahnoori257

Is the problem solved? I found something here that might help: https://github.com/hwchase17/langchain/issues/1986. Not sure if it applies.

psinha30 avatar May 15 '23 06:05 psinha30

OK, using pydantic==1.9.0 solves this.

psinha30 avatar May 15 '23 06:05 psinha30

Can confirm @psinha30's solution:

OK, using pydantic==1.9.0 solves this.

Normandabald avatar May 24 '23 22:05 Normandabald
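
To close the loop, the fix the thread converged on is a version pin. A sketch of the commands, assuming pip (`pydantic.VERSION` is the v1-era version attribute):

```shell
# Downgrade pydantic to 1.9.0, the version reported to work in this thread.
pip install "pydantic==1.9.0"

# Verify the installed version before re-running privateGPT.py.
python -c "import pydantic; print(pydantic.VERSION)"
```

If the project uses a requirements.txt, adding the line `pydantic==1.9.0` there keeps the pin reproducible across environments.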