private-gpt
Not able to run "GPT4All 13B snoozy"
I am not able to run "GPT4All 13B snoozy". The file was downloaded from https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin, and I am getting:
gptj_model_load: loading model from 'models/ggml-gpt4all-l13b-snoozy.bin' - please wait ...
gptj_model_load: invalid model file 'models/ggml-gpt4all-l13b-snoozy.bin' (bad magic)
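The "bad magic" message means the loader checked the first four bytes of the file and did not find the value it expects. A quick way to see which ggml container a file actually uses is a sketch like the one below; the magic constants are the ones used by llama.cpp's loaders, and the labels are only a rough guide:

```python
import struct

# Known ggml container magics (uint32, little-endian on disk);
# values are the constants used by llama.cpp's file loaders.
MAGICS = {
    0x67676D6C: "ggml (unversioned, oldest)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (newer, mmap-able)",
}

def ggml_magic(path: str) -> str:
    """Return a human-readable label for the file's leading magic number."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
```

Run it against models/ggml-gpt4all-l13b-snoozy.bin; if the label does not match what your loader version expects, that mismatch is what surfaces as "bad magic" or "this format is no longer supported".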
Yeah, same here, including the ggml version of MPT-7B-Chat.
same here
@PulpCattel Can you paste your .env, along with the path from which you downloaded the model?
Something like this:
PERSIST_DIRECTORY=db
MODEL_TYPE=LlamaCpp
MODEL_PATH=models/ggml-gpt4all-l13b-snoozy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
The model was downloaded from the same website linked in the README.
Got it, so you changed the MODEL_TYPE, and then it works. Thanks.
It also works for me if you change the backend= as shown in the link above, but that involves touching code, so it might not be preferred by some. See also the other link I posted above; you may have to update/downgrade the llama library to work with different models (they introduced a breaking change).
You were right, some more things will have to be changed. They have introduced breaking changes twice this month.
In any case I tried with
langchain==0.0.171
pygpt4all==1.1.0
chromadb==0.3.23
llama-cpp-python==0.1.50
urllib3==2.0.2
pdfminer.six==20221105
python-dotenv==1.0.0
unstructured==0.6.6
extract-msg==0.41.1
tabulate==0.9.0
pandoc==2.3
pypandoc==1.11
tqdm==4.65.0
gptcache==0.1.22
MODEL_TYPE=LlamaCpp MODEL_PATH=./model/ggml-gpt4all-l13b-snoozy.bin
Downloaded from: https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin
It didn't work, so it seems some things are not matching, as I am getting "error loading model: this format is no longer supported".
Not sure what will work.
llama-cpp-python==0.1.50
This version of the library no longer supports that model. See https://github.com/imartinez/privateGPT/issues/220#issuecomment-1550376561 (it's the same one I mentioned above); you have to downgrade your library version (use a virtualenv for convenience if you can).
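A convenient way to apply that downgrade is a pinned requirements line installed inside a fresh virtualenv. The <0.1.50 upper bound below is an assumption based on the version reported above breaking; check the linked issue for the exact release that works for your model file:

```
llama-cpp-python<0.1.50
```

Put that in a requirements file (or pass it directly to pip install, quoted) inside the virtualenv, so the pin does not affect other projects.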
> I am not able to run "GPT4All 13B snoozy". The file was downloaded from https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin, and I am getting:
> gptj_model_load: loading model from 'models/ggml-gpt4all-l13b-snoozy.bin' - please wait ... gptj_model_load: invalid model file 'models/ggml-gpt4all-l13b-snoozy.bin' (bad magic)
Try updating your .env file from MODEL_TYPE=GPT4All to MODEL_TYPE=LlamaCpp.
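That works because privateGPT branches on MODEL_TYPE when it builds the LLM, and the snoozy file is a llama-family model, so the GPT4All branch (which uses the gptj loader, as the error log shows) rejects it. A simplified stand-alone sketch of that selection; the real code instantiates langchain's GPT4All/LlamaCpp classes, and the strings here are placeholders:

```python
import os

def select_backend(model_type: str) -> str:
    # privateGPT reads MODEL_TYPE from the .env file and picks a
    # loader accordingly. The gptj-based GPT4All loader rejects
    # llama-family files such as ggml-gpt4all-l13b-snoozy.bin
    # with "bad magic".
    backends = {
        "GPT4All": "gptj loader",
        "LlamaCpp": "llama.cpp loader",
    }
    try:
        return backends[model_type]
    except KeyError:
        raise ValueError(f"Unsupported MODEL_TYPE: {model_type}")

# snoozy needs the llama.cpp loader, hence MODEL_TYPE=LlamaCpp
print(select_backend(os.environ.get("MODEL_TYPE", "LlamaCpp")))
```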