
OSError: [WinError -1073741795] Windows Error 0xc000001d And AttributeError: 'Llama' object has no attribute 'ctx'

Open immmor opened this issue 1 year ago • 9 comments

D:\CursorFile\Python\privateGPT-main>python ingest.py
Loading documents from source_documents
Loaded 2 documents from source_documents
Split into 603 chunks of text (max. 500 tokens each)
llama.cpp: loading model from D:\CursorFile\Python\privateGPT-main\models\ggml-model-q4_0.bin
Traceback (most recent call last):
  File "D:\anaconda3\lib\site-packages\langchain\embeddings\llamacpp.py", line 78, in validate_environment
    values["client"] = Llama(
  File "D:\anaconda3\lib\site-packages\llama_cpp\llama.py", line 155, in __init__
    self.ctx = llama_cpp.llama_init_from_file(
  File "D:\anaconda3\lib\site-packages\llama_cpp\llama_cpp.py", line 182, in llama_init_from_file
    return _lib.llama_init_from_file(path_model, params)
OSError: [WinError -1073741795] Windows Error 0xc000001d

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\CursorFile\Python\privateGPT-main\ingest.py", line 62, in <module>
    main()
  File "D:\CursorFile\Python\privateGPT-main\ingest.py", line 53, in main
    llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx)
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1102, in pydantic.main.validate_model
  File "D:\anaconda3\lib\site-packages\langchain\embeddings\llamacpp.py", line 98, in validate_environment
    raise NameError(f"Could not load Llama model from path: {model_path}")
NameError: Could not load Llama model from path: D:\CursorFile\Python\privateGPT-main\models\ggml-model-q4_0.bin
Exception ignored in: <function Llama.__del__ at 0x0000017F4795CAF0>
Traceback (most recent call last):
  File "D:\anaconda3\lib\site-packages\llama_cpp\llama.py", line 978, in __del__
    if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
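The trailing AttributeError is a secondary symptom, not the root problem: Llama.__init__ raised before self.ctx was ever assigned, so when Python cleans up the half-built object, __del__ touches an attribute that does not exist. A minimal sketch of the pattern (the class and attribute names below are illustrative, not llama-cpp-python's actual code):

```python
class Resource:
    """Sketch of the __init__/__del__ interaction behind the AttributeError."""

    def __init__(self, fail=True):
        if fail:
            # Mirrors the native loader failing before self.ctx is set.
            raise OSError("native init failed")
        self.ctx = object()

    def __del__(self):
        # `if self.ctx is not None:` would raise AttributeError on a
        # half-built object; getattr with a default avoids that.
        if getattr(self, "ctx", None) is not None:
            pass  # release the native context here


try:
    Resource()          # __init__ raises; __del__ still runs on cleanup
except OSError:
    print("load failed, no secondary AttributeError")
```

So the error to chase is the first one (0xc000001d during model load); the missing-ctx message will disappear once loading succeeds.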

immmor avatar May 14 '23 08:05 immmor

When I run python ingest.py, I get these two errors...

immmor avatar May 14 '23 08:05 immmor

The script tries to load models from the "models" directory in the code folder. You just need to download the model and put it in the directory. https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin https://huggingface.co/Pi3141/alpaca-native-7B-ggml/resolve/397e872bf4c83f4c642317a5bf65ce84a105786e/ggml-model-q4_0.bin

Above are the model links to download, present in the readme of the project.

nikhilsingh291 avatar May 14 '23 16:05 nikhilsingh291

I've done this, and it's still giving me the error.

redfort1987 avatar May 15 '23 10:05 redfort1987

The models work with GPT4all, so it must be the privateGPT code.

redfort1987 avatar May 15 '23 12:05 redfort1987

Getting "OSError: [WinError -1073741795] Windows Error 0xc000001d" also. My next step was to validate the LLMs. I'll try GPT4all to validate my downloaded LLMs.

mrharrison007 avatar May 17 '23 01:05 mrharrison007

The script tries to load models from the "models" directory in the code folder. You just need to download the model and put it in the directory. https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin https://huggingface.co/Pi3141/alpaca-native-7B-ggml/resolve/397e872bf4c83f4c642317a5bf65ce84a105786e/ggml-model-q4_0.bin

Above are the model links to download, present in the readme of the project.

I have downloaded those models and put them in the 'models' folder. The error still happens.

immmor avatar May 18 '23 12:05 immmor

Have you found a solution? I am having the same problem

planerboy avatar May 29 '23 16:05 planerboy

Same problem when running privateGPT.py:

....
  File ".../AppData\Local\Programs\Python\Python311\Lib\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
    llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
OSError: [WinError -1073741795] Windows Error 0xc000001d

Running Python 3.11 under Windows 10 x64, everything installed with default settings, following the tutorial. Any idea for a fix?

LaszloA avatar Jun 08 '23 14:06 LaszloA

Getting "OSError: [WinError -1073741795] Windows Error 0xc000001d" also. My next step was to validate the LLMs. I'll try GPT4all to validate my downloaded LLMs.

What do you mean by "validate the LLMs"?

LaszloA avatar Jun 08 '23 14:06 LaszloA

I also get those Windows errors with the version of gpt4all that does not cause the verification errors right away, so I assume this is the version that should work.

.conda\envs\gpt\lib\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
    llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
OSError: [WinError -1073741795] Windows Error 0xc000001d

Trying sfc /scannow right now. You never know...

edit: nope

bones0 avatar Sep 07 '23 21:09 bones0

https://stackoverflow.com/questions/53057591/importerror-dll-load-failed-with-error-code-1073741795 talks about old processors and downgrading TensorFlow. We don't have TensorFlow here, but still... I have a 10-year-old Xeon. Not sure what to try to fix it, though.
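The old-processor theory fits the error code itself. The two spellings seen in this thread are the same number: WinError -1073741795 is the signed 32-bit view of the NTSTATUS 0xC000001D, which is STATUS_ILLEGAL_INSTRUCTION. That is what Windows reports when the process executes an instruction the CPU does not support, and prebuilt llama.cpp/gpt4all binaries are commonly compiled assuming AVX/AVX2. A quick check of the conversion:

```python
# Python's OSError carries the NTSTATUS as a signed 32-bit integer.
# Masking it back to unsigned recovers the familiar hex code:
winerror = -1073741795
ntstatus = winerror & 0xFFFFFFFF
print(hex(ntstatus))  # 0xc000001d -> STATUS_ILLEGAL_INSTRUCTION
```

So rather than file corruption or RAM, the likely culprit is a CPU missing the instruction-set extensions the binary was built for; a build compiled without AVX (or building llama.cpp from source with those flags off) would be the thing to try.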

bones0 avatar Sep 07 '23 22:09 bones0

I installed gpt4all, and the model downloader there issued several warnings that the bigger models need more RAM than I have. I am just guessing here, but could some Windows errors occur because the model is simply using up all the RAM?

EDIT: The groovy model is not maxing out the RAM, so that is not likely to be the problem here.

bones0 avatar Sep 08 '23 11:09 bones0

Interestingly, gpt4all also only reports "model loading error", no matter which model I try. I suspect something is terminally messed up with Windows, and I guess I will give up now. GPT apparently simply does not run here.

Edit: Yes, the "here" is the point. It works on the Core i7-5500 but not on the Xeon W3690. You might want to try different CPUs, just in case; it does seem to matter.

bones0 avatar Sep 08 '23 11:09 bones0