File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings
I am seeing the error below when I run ingest.py. Any thoughts on how I can resolve it? Kindly advise.
Error:

```
error loading model: this format is no longer supported (see https://github.com/ggerganov/llama.cpp/pull/1305)
llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "/Users/FBT/Desktop/Projects/privategpt/privateGPT/ingest.py", line 39, in
```
My .env file:

```
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=/Users/FBT/Desktop/Projects/privategpt/privateGPT/models/ggml-model-q4_0.bin
MODEL_N_CTX=1000
```
Did you download and add a model?
@sandyrs: as stated in the README.md, you should first download the model file into a models directory in the project root. I encountered the same issue and this worked.
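For reference, a sketch of the layout that worked for me (the model file name is the default one from the README; adjust to whatever you downloaded):

```
privateGPT/
├── .env
├── ingest.py
├── privateGPT.py
└── models/
    └── ggml-gpt4all-j-v1.3-groovy.bin
```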
Yes, I have downloaded the models and put the same path in the .env file, but I am still seeing the issue.
My .env file:

```
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME='sentence-transformers/all-MiniLM-L6-v2'
MODEL_N_CTX=1000
```
Can you please help me with how I can resolve this error? @christopherpickering / @gvilarino
Had the same issue. Moving the downloaded models into a models directory resolved it.

I see that you have MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin in your .env file, so it's likely that the issue is the same for you. Just create a models directory and move your downloaded models into it.
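A minimal sketch, assuming the model landed in your Downloads folder (adjust both paths to your setup):

```sh
# Run from the privateGPT project root
mkdir -p models
mv ~/Downloads/ggml-gpt4all-j-v1.3-groovy.bin models/
```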
@albertas - thanks for the reply. I tried the recommended steps and am seeing a similar error. I see this error when I run the privateGPT.py file; data ingestion was successful. I'd appreciate it if you could guide me to a possible resolution for this.
My .env file:

```
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
TARGET_SOURCE_CHUNKS=4
```
Similar issue. I tried both putting the model directly in the .\models subfolder and putting it in its own folder inside the .\models subdirectory. The only way I can get it to work is by using the originally listed model, which I'd rather not do since I have a 3090. It's most likely a configuration issue in the .env file, but I'm not 100% sure what needs to change when you switch to a different model.
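If it helps, my understanding is that MODEL_TYPE and MODEL_PATH both have to match the new model; a sketch, assuming a llama.cpp-compatible model (the file name here is hypothetical):

```
MODEL_TYPE=LlamaCpp
MODEL_PATH=models/your-llama-model.bin
```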
I ran into the same problem. I used an absolute path for the model path, and that resolved the issue.
Thanks. That fixed it for me :-)
For me, I used an absolute path directly in privateGPT.py. Previously it was:

```python
model_path = os.environ.get('MODEL_PATH')
```

I changed it to:

```python
model_path = "C:/Users/YM/Desktop/PrivateGPT/privateGPT/models/ggml-gpt4all-j-v1.3-groovy.bin"
```

In .env, here is my config:

```
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
TARGET_SOURCE_CHUNKS=4
```
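If you'd rather not hard-code an absolute path, here is a sketch of resolving MODEL_PATH against the script's own directory instead (this __file__-based resolution is my own suggestion, not what privateGPT.py ships with; it makes relative paths work no matter where you launch the script from):

```python
import os

from dotenv import load_dotenv

load_dotenv()

# Resolve MODEL_PATH relative to this script's directory rather than the
# current working directory, so running from anywhere still finds the model.
script_dir = os.path.dirname(os.path.abspath(__file__))
model_path = os.path.join(script_dir, os.environ.get('MODEL_PATH'))
```

Note that os.path.join returns the second argument unchanged when it is already absolute, so an absolute MODEL_PATH in .env keeps working.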
Changing the path in privateGPT.py also does not fix the issue. Please help.
Any solution found for this yet? Using an absolute path (on macOS) does not fix it for me.
Same here on Mac. Using the absolute path didn't fix the problem.
If you set the model path correctly in the .env file and remove the extra n_ctx=1000 argument, it works as expected.
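A sketch of what that removal could look like in privateGPT.py (the exact argument list depends on your langchain/gpt4all versions, so treat this as an assumption rather than the exact shipped code):

```python
from langchain.llms import GPT4All  # import path as of the langchain 0.0.x releases

model_path = "models/ggml-gpt4all-j-v1.3-groovy.bin"  # adjust to your model

# Before (fails on newer gpt4all bindings that no longer accept n_ctx):
# llm = GPT4All(model=model_path, n_ctx=1000, backend='gptj', verbose=False)

# After: drop the n_ctx argument
llm = GPT4All(model=model_path, backend='gptj', verbose=False)
```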