langchain-ask-pdf-local

validation error for LlamaCpp __root__ Could not load Llama model from path: ./models/stable-vicuna-13B.ggml.q4_0.bin

jimmathew999 opened this issue · 1 comment

I'm getting the following error:

ValidationError: 1 validation error for LlamaCpp __root__ Could not load Llama model from path: ./models/stable-vicuna-13B.ggml.q4_0.bin. Received error (type=value_error)

I updated LangChain and llama-cpp-python to the latest versions, but I still get the same error. What could be the issue?

— jimmathew999, May 21 '23 19:05
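LlamaCpp raises the same "Could not load Llama model from path" validation error for both a missing file and an incompatible model format, so it helps to rule out the simpler cause first. A minimal sketch (the `check_model_path` helper is hypothetical, not part of LangChain; the path is the one from the issue):

```python
from pathlib import Path

def check_model_path(path: str) -> bool:
    """Return True if the model file exists and is non-empty.

    LlamaCpp reports a missing file and an incompatible GGML format
    with the same ValidationError, so checking the path first rules
    out the simpler cause.
    """
    p = Path(path)
    return p.is_file() and p.stat().st_size > 0

# Path from the issue; verify it before handing it to LlamaCpp.
model_path = "./models/stable-vicuna-13B.ggml.q4_0.bin"
if not check_model_path(model_path):
    print(f"Model file missing or empty: {model_path}")
```

If the file exists and is non-empty, the error usually points at a format mismatch between the model file and the installed llama.cpp, as discussed below.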

There is either something wrong with the latest llama-cpp-python, or it hasn't been updated with the latest llama.cpp binary yet. I was able to make it work by manually replacing llama.dll inside the llama-cpp-python package with the latest one from the llama.cpp releases.

I guess at the moment the easiest way would be to install the exact package versions from requirements.txt, get the old LLM model from https://huggingface.co/TheBloke/stable-vicuna-13B-GGML/tree/previous_llama, and wait for further package updates.
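Around this time llama.cpp changed its GGML file format, which is why a model quantized for one version fails to load in another. You can inspect which container variant a file uses by reading its 4-byte magic; the hex values below are the ASCII strings "ggml", "ggmf", and "ggjt" as they appear in the llama.cpp source (a diagnostic sketch, not an official API):

```python
import struct

# GGML container magics: ASCII "ggml"/"ggmf"/"ggjt" as 32-bit values.
# Listed as an assumption about the file variants circulating in May 2023.
GGML_MAGICS = {
    0x67676D6C: "ggml (unversioned, oldest)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (mmap-able, used by newer llama.cpp)",
}

def sniff_ggml_magic(path: str) -> str:
    """Read the 4-byte little-endian magic at the start of a GGML file."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return GGML_MAGICS.get(magic, f"unknown (0x{magic:08x})")
```

If the magic reported for your model doesn't match what your installed llama.cpp expects, re-downloading a model quantized for that version (e.g. from the `previous_llama` branch linked above) is the fix.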

— wafflecomposite, May 21 '23 20:05