
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings

Open sandyrs9421 opened this issue 1 year ago • 9 comments

I am seeing the error below when I run ingest.py. Any thoughts on how I can resolve it? Kindly advise.

Error:

error loading model: this format is no longer supported (see https://github.com/ggerganov/llama.cpp/pull/1305)
llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "/Users/FBT/Desktop/Projects/privategpt/privateGPT/ingest.py", line 39, in <module>
    main()
  File "/Users/FBT/Desktop/Projects/privategpt/privateGPT/ingest.py", line 30, in main
    llama = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings
__root__
  Could not load Llama model from path: ./models/ggml-model-q4_0.bin. Received error (type=value_error)

My .env file:

PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=/Users/FBT/Desktop/Projects/privategpt/privateGPT/models/ggml-model-q4_0.bin
MODEL_N_CTX=1000
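For context, the failing call at ingest.py line 30 in the traceback above is langchain's LlamaCppEmbeddings wrapper. A minimal reconstruction (not the exact project source) looks like this:

```python
# Minimal reconstruction of the failing call from the traceback above
# (illustrative, not the exact ingest.py source). LlamaCppEmbeddings wraps
# llama-cpp-python, and pydantic surfaces any load failure as a
# ValidationError. Per the linked llama.cpp PR #1305, .bin files in the old
# ggml format can no longer be loaded by newer llama.cpp builds, so the
# model file itself needs to be re-downloaded or re-converted.
from langchain.embeddings import LlamaCppEmbeddings

llama = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")
```

Note also that EMBEDDINGS_MODEL_NAME in the .env above points at a ggml .bin file, whereas later comments in this thread use a sentence-transformers model name instead.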

sandyrs9421 avatar May 24 '23 18:05 sandyrs9421

Did you download and add a model?

christopherpickering avatar May 24 '23 19:05 christopherpickering

@sandyrs: as stated in the README.md, you should first download the model file into a models directory in the project root. I ran into the same issue and this fixed it.

gvilarino avatar May 25 '23 16:05 gvilarino

Yes, I have downloaded the models and set the same path in the .env file, but I am still seeing the issue.

[Screenshot 2023-05-26 at 12 07 20 PM]

My .env file:

PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME='sentence-transformers/all-MiniLM-L6-v2'
MODEL_N_CTX=1000

Can you please help me resolve this error? @christopherpickering / @gvilarino

sandyrs9421 avatar May 26 '23 06:05 sandyrs9421

Had the same issue. Moving the downloaded models into the models directory resolved it.

I see that you have MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin in your .env file, so it's likely that the issue is the same for you. Just create a models directory and move your downloaded models into it.
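A quick, illustrative way to confirm the file is really where the .env key says it is (not part of the project, just a sanity check assuming python-dotenv is available, which the project's scripts already rely on to read .env):

```python
# Illustrative sanity check: confirm MODEL_PATH from .env resolves to an
# existing file before running ingest.py or privateGPT.py.
import os
from pathlib import Path

from dotenv import load_dotenv

load_dotenv()
model_path = Path(os.environ.get("MODEL_PATH", "")).expanduser()
print(f"MODEL_PATH={model_path} exists={model_path.is_file()}")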

albertas avatar May 26 '23 20:05 albertas

@albertas - thanks for the reply. I tried the recommended steps and am still seeing a similar error, this time when I run the privateGPT.py file. Data ingestion was successful. I would appreciate it if you could guide me toward a possible resolution.

My .env file:

PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
TARGET_SOURCE_CHUNKS=4

[Screenshot 2023-05-29 at 12 25 06 PM]

sandyrs9421 avatar May 29 '23 06:05 sandyrs9421

Similar issue. I tried both putting the model in the .\models subfolder and giving it its own folder inside the .\models subdirectory. The only way I can get it to work is by using the originally listed model, which I'd rather not do since I have a 3090. It's most likely a configuration issue in the .env file, but I'm not 100% sure what all needs to change when you switch to a different model.
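For what it's worth, the script reads its settings from those .env keys roughly as sketched below (a reconstruction, not the exact privateGPT.py source); when switching models, MODEL_TYPE, MODEL_PATH and MODEL_N_CTX need to stay consistent with one another:

```python
# Rough reconstruction of how the configuration is read from .env
# (illustrative, not the exact privateGPT.py source).
import os

from dotenv import load_dotenv

load_dotenv()
model_type = os.environ.get("MODEL_TYPE")              # e.g. "GPT4All"
model_path = os.environ.get("MODEL_PATH")              # path to the .bin file
model_n_ctx = os.environ.get("MODEL_N_CTX")            # context window size
embeddings_model_name = os.environ.get("EMBEDDINGS_MODEL_NAME")
persist_directory = os.environ.get("PERSIST_DIRECTORY")  # vector store folder
```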

agentmith avatar May 29 '23 08:05 agentmith

I ran into the same problem. Using an absolute path for the model path resolved the issue.

ayteakkaya536 avatar May 29 '23 22:05 ayteakkaya536

> I ran into the same problem. Using an absolute path for the model path resolved the issue.

Thanks. That fixed it for me :-)

Rasmus-Riis avatar May 30 '23 06:05 Rasmus-Riis

For me, I used an absolute path in privateGPT.py. Previously it was:

model_path = os.environ.get('MODEL_PATH')

I changed it to:

model_path = "C:/Users/YM/Desktop/PrivateGPT/privateGPT/models/ggml-gpt4all-j-v1.3-groovy.bin"

In .env, here is my config:

PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
TARGET_SOURCE_CHUNKS=4
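A less machine-specific variant of the same workaround would be to resolve the relative MODEL_PATH from .env into an absolute path at runtime instead of hardcoding it (a sketch, not the project's actual code):

```python
# Sketch of a more portable version of the hardcoded-path workaround above:
# resolve the relative MODEL_PATH from .env into an absolute path at runtime.
import os
from pathlib import Path

model_path = str(
    Path(os.environ.get("MODEL_PATH", "models/ggml-gpt4all-j-v1.3-groovy.bin")).resolve()
)
```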

Yousef-Mush avatar Jun 01 '23 05:06 Yousef-Mush

Changing the path in privateGPT.py also does not fix the issue. Please help.

tosundar40 avatar Jul 20 '23 18:07 tosundar40

Any solution found for this yet? Using an absolute path (on macOS) does not fix it for me.

ciathyza avatar Jul 22 '23 06:07 ciathyza

Same here on Mac; using the absolute path didn't fix the problem.

abcnow avatar Jul 22 '23 12:07 abcnow

If you set the model path correctly as mentioned in the .env file and remove the extra argument n_ctx=1000, it works as expected.
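In other words, the GPT4All LLM gets constructed without the context-size keyword, roughly like this (a sketch assuming langchain's GPT4All wrapper; arguments other than model are illustrative):

```python
# Sketch of the suggestion above: build the GPT4All LLM without passing
# n_ctx, which some gpt4all/langchain versions reject as an extra argument.
# Arguments other than `model` are illustrative.
from langchain.llms import GPT4All

llm = GPT4All(model=model_path, backend="gptj", verbose=False)
```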

danielmiranda avatar Jul 22 '23 23:07 danielmiranda