private-gpt
OSError: [WinError -1073741795] Windows Error 0xc000001d And AttributeError: 'Llama' object has no attribute 'ctx'
D:\CursorFile\Python\privateGPT-main>python ingest.py
Loading documents from source_documents
Loaded 2 documents from source_documents
Split into 603 chunks of text (max. 500 tokens each)
llama.cpp: loading model from D:\CursorFile\Python\privateGPT-main\models\ggml-model-q4_0.bin
Traceback (most recent call last):
  File "D:\anaconda3\lib\site-packages\langchain\embeddings\llamacpp.py", line 78, in validate_environment
    values["client"] = Llama(
  File "D:\anaconda3\lib\site-packages\llama_cpp\llama.py", line 155, in __init__
    self.ctx = llama_cpp.llama_init_from_file(
  File "D:\anaconda3\lib\site-packages\llama_cpp\llama_cpp.py", line 182, in llama_init_from_file
    return _lib.llama_init_from_file(path_model, params)
OSError: [WinError -1073741795] Windows Error 0xc000001d
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\CursorFile\Python\privateGPT-main\ingest.py", line 62, in
When I run python ingest.py, I get these two errors.
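For what it's worth, the negative WinError number and the hex code are the same value: Python shows the NTSTATUS code as a signed 32-bit integer. Masking it back to 32 bits recovers 0xC000001D, which is STATUS_ILLEGAL_INSTRUCTION, i.e. the process executed a CPU instruction this processor does not implement:

```python
# The negative WinError is the NTSTATUS code viewed as a signed 32-bit
# integer; masking to 32 bits recovers the hex value from the message.
winerror = -1073741795
ntstatus = winerror & 0xFFFFFFFF
print(hex(ntstatus))  # 0xc000001d
# 0xC000001D is STATUS_ILLEGAL_INSTRUCTION: the binary used an
# instruction (for example an AVX2 op) the CPU does not support.
```

So this points at the compiled native library, not at the model files or the Python code.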
The script tries to load models from the "models" directory in the code folder. You just need to download the model and put it in the directory. https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin https://huggingface.co/Pi3141/alpaca-native-7B-ggml/resolve/397e872bf4c83f4c642317a5bf65ce84a105786e/ggml-model-q4_0.bin
Above are the model links to download, present in the readme of the project.
I've done this, and it's still giving me the error.
The models work with GPT4all, so it must be the privateGPT code.
Getting "OSError: [WinError -1073741795] Windows Error 0xc000001d" also. My next step was to validate the LLMs. I'll try GPT4all to validate my downloaded LLMs.
I have downloaded those models and put them in the 'models' folder. The error still happens.
Have you found a solution? I am having the same problem
Same problem when running privateGPT.py:
....
  File ".../AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
    llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
OSError: [WinError -1073741795] Windows Error 0xc000001d
Running Python 3.11 on Windows 10 x64; everything was installed with default settings, following the tutorial. Any idea for a fix?
Getting "OSError: [WinError -1073741795] Windows Error 0xc000001d" also. My next step was to validate the LLMs. I'll try GPT4all to validate my downloaded LLMs.
What do you mean by "validate the LLMs"?
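One cheap way to sanity-check a downloaded model (my own sketch, not part of privateGPT) is to verify that the file actually starts with a known ggml-family magic number. A download that was truncated or that saved an HTML error page fails this immediately:

```python
import struct

# Known magic numbers for ggml-family model files (assumption: the
# downloads in this thread are ggml / ggmf / ggjt format files).
GGML_MAGICS = {0x67676D6C, 0x67676D66, 0x67676A74}  # 'ggml', 'ggmf', 'ggjt'

def looks_like_ggml(path):
    """Return True if the file's first 4 bytes match a known ggml magic."""
    with open(path, "rb") as f:
        head = f.read(4)
    if len(head) < 4:
        return False
    (magic,) = struct.unpack("<I", head)
    return magic in GGML_MAGICS
```

This only catches corrupt or mislabeled files, though; a model that passes the check can still crash an incompatible binary, as the error code above suggests.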
I also get those Windows errors with the version of gpt4all that does not trigger the verification errors right away, so I assume that is the version which should work.
.conda\envs\gpt\lib\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
    llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
OSError: [WinError -1073741795] Windows Error 0xc000001d
Trying sfc /scannow right now. You never know...
edit: nope
https://stackoverflow.com/questions/53057591/importerror-dll-load-failed-with-error-code-1073741795 talks about old processors and downgrading tensorflow. We don't have tensorflow here, but still... I have a 10-year-old Xeon. Not sure what to try to fix it, though.
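If the old-CPU theory is right, one workaround worth trying (an untested sketch on my part, not an official fix) is to rebuild llama-cpp-python with the AVX/AVX2/FMA code paths disabled, so the binary only uses instructions a pre-AVX Xeon has. The LLAMA_* names are llama.cpp CMake options, and llama-cpp-python's build reads the CMAKE_ARGS environment variable:

```shell
:: Windows cmd: rebuild llama-cpp-python without AVX/AVX2/FMA/F16C,
:: so it runs on CPUs that predate those instruction sets.
set CMAKE_ARGS=-DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_FMA=off -DLLAMA_F16C=off
pip install --force-reinstall --no-cache-dir llama-cpp-python
```

Note this only helps the ingest.py / llama-cpp-python path; the gpt4all package ships its own prebuilt binaries, so the same crash there would need a different fix.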
I installed gpt4all, and the model downloader there issued several warnings that the bigger models need more RAM than I have. I am just guessing here, but could some of these Windows errors occur because the model is simply using up all the RAM?
EDIT: The groovy model is not maxing out the RAM, so that is not likely to be the problem here.
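For a rough sense of scale (my own back-of-the-envelope numbers, not measurements from privateGPT): early q4_0 quantization stores each block of 32 weights as 4-bit values plus one fp32 scale, about 5 bits per weight on average, so just the weights of a 7B model are over 4 GB before any runtime overhead:

```python
# Back-of-the-envelope RAM estimate for a 7B-parameter q4_0 model.
# q4_0 blocks: 32 four-bit weights + one fp32 scale per block.
bits_per_weight = (32 * 4 + 32) / 32  # = 5.0 bits per weight
params = 7e9
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.1f} GB for weights alone")  # ~4.4 GB
# With context/KV cache and runtime overhead on top, plan for
# roughly 5-6 GB of free RAM for a 7B q4_0 model.
```

That said, running out of RAM on Windows would more typically surface as paging or an out-of-memory error, not as 0xc000001d, which is consistent with the edit above.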
Interestingly, gpt4all also only reports "model loading error", no matter which model I try. I suspect something is terminally messed up with Windows, and I guess I will give up now. GPT apparently simply does not run here.
Edit: Yes, the "here" is the point. It works on the Core i7-5500 but not on the Xeon W3690. You might want to try a different CPU, just in case. It does seem to matter.