langchain-ask-pdf-local
validation error for LlamaCpp __root__ Could not load Llama model from path: ./models/stable-vicuna-13B.ggml.q4_0.bin
I'm getting the following error:
ValidationError: 1 validation error for LlamaCpp __root__ Could not load Llama model from path: ./models/stable-vicuna-13B.ggml.q4_0.bin. Received error (type=value_error)
I updated LangChain and llama-cpp-python to the latest versions, but I still get the same error. What could be the issue?
Either something is wrong with the latest llama-cpp-python, or it hasn't been updated with the latest llama.cpp binary yet. I was able to make it work by manually replacing llama.dll inside the llama-cpp-python package with the latest one from the llama.cpp releases.
I guess at the moment the easiest way would be to install the exact package versions from requirements.txt, get the old LLM model from https://huggingface.co/TheBloke/stable-vicuna-13B-GGML/tree/previous_llama, and wait for further package updates.
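The workaround above could be sketched like this (assuming you're in the repo root and it ships a requirements.txt with pinned versions; the model filename is the one from the error message, fetched from the previous_llama branch via the Hugging Face resolve URL):

```shell
# Install the exact pinned versions from the repo's requirements.txt
pip install -r requirements.txt

# Download the older GGML model (pre-format-change llama.cpp weights)
# from the previous_llama branch into the ./models directory
wget -P ./models \
  https://huggingface.co/TheBloke/stable-vicuna-13B-GGML/resolve/previous_llama/stable-vicuna-13B.ggml.q4_0.bin
```

The key point is matching the model file format to the llama.cpp version bundled with the pinned llama-cpp-python; mixing a new model with an old binary (or vice versa) produces exactly this "Could not load Llama model" validation error.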