
Need help with some errors

Open dbxndn opened this issue 2 years ago • 2 comments

  File "F:\privateGPT\Lib\site-packages\langchain\embeddings\llamacpp.py", line 79, in validate_environment
    values["client"] = Llama(
  File "F:\privateGPT\Lib\site-packages\llama_cpp\llama.py", line 155, in __init__
    self.ctx = llama_cpp.llama_init_from_file(
  File "F:\privateGPT\Lib\site-packages\llama_cpp\llama_cpp.py", line 182, in llama_init_from_file
    return _lib.llama_init_from_file(path_model, params)
OSError: [WinError -1073741795] Windows Error 0xc000001d

During handling of the above exception, another exception occurred:

  File "F:\privateGPT\ingest.py", line 62, in <module>
    main()
  File "F:\privateGPT\ingest.py", line 53, in main
    llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx)
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1102, in pydantic.main.validate_model
  File "F:\privateGPT\Lib\site-packages\langchain\embeddings\llamacpp.py", line 99, in validate_environment
    raise NameError(f"Could not load Llama model from path: {model_path}")
NameError: Could not load Llama model from path: F:/privateGPT/models/ggml-model-q4_0.bin

Exception ignored in: <function Llama.__del__ at 0x000002307F085E40>
Traceback (most recent call last):
  File "F:\privateGPT\Lib\site-packages\llama_cpp\llama.py", line 978, in __del__
    if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
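Note that the final `NameError` is raised whenever `Llama()` fails for any reason, so it can mask problems other than a bad path. One way to narrow it down is to first confirm the model file itself exists and is non-empty before handing the path to `LlamaCppEmbeddings` (a minimal sketch; `check_model` is a hypothetical helper, not part of privateGPT):

```python
from pathlib import Path


def check_model(model_path: str) -> int:
    """Verify the model file exists and return its size in bytes.

    Raises FileNotFoundError with a clear message if the path is wrong,
    which distinguishes a path problem from a loader crash like the
    OSError 0xc000001d above.
    """
    p = Path(model_path)
    if not p.is_file():
        raise FileNotFoundError(f"Model file not found: {p}")
    size = p.stat().st_size
    if size == 0:
        raise ValueError(f"Model file is empty (possibly a failed download): {p}")
    return size
```

If this check passes but `Llama()` still crashes with the same `OSError`, the path is probably not the culprit.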

dbxndn avatar May 17 '23 02:05 dbxndn

I have a similar issue on my Windows PC.

jrfcs avatar May 22 '23 00:05 jrfcs

Follow the instruction in the README. Make sure to set the proper path to your model in your .env file.
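For reference, the `.env` file lives in the project root and might look something like this (a sketch only; the exact variable names come from the README of your privateGPT version and may differ — the traceback above suggests `LLAMA_EMBEDDINGS_MODEL` and `MODEL_N_CTX` are among them):

```ini
PERSIST_DIRECTORY=db
LLAMA_EMBEDDINGS_MODEL=F:/privateGPT/models/ggml-model-q4_0.bin
MODEL_N_CTX=1000
```

Make sure the path points at the file you actually downloaded, with no typos in the filename.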

PulpCattel avatar May 22 '23 08:05 PulpCattel