ValueError: Model path does not exist: LAMA_EMBEDDINGS_MODEL=C:/martinezchatgpt/models/ggml-model-q4_0.bin
The error persists; full output below:
(sheld) C:\martinezchatgpt\privateGPT-main\privateGPT-main\source_documents>python -m ingest
Loading documents from source_documents
Loaded 1 documents from source_documents
Split into 0 chunks of text (max. 500 tokens each)
Traceback (most recent call last):
  File "C:\Users\sheld\.virtualenvs\sheld-ul37renh\Lib\site-packages\langchain\embeddings\llamacpp.py", line 78, in validate_environment
    values["client"] = Llama(
                       ^^^^^^
  File "C:\Users\sheld\.virtualenvs\sheld-ul37renh\Lib\site-packages\llama_cpp\llama.py", line 155, in __init__
    raise ValueError(f"Model path does not exist: {model_path}")
ValueError: Model path does not exist: LAMA_EMBEDDINGS_MODEL=C:/martinezchatgpt/models/ggml-model-q4_0.bin

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "
We moved away from llama embeddings. Pull the latest changes, install requirements, remove the db folder, and run the ingestion again.
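For reference, on Windows the update would look roughly like the commands below. This is only a sketch: it assumes the repository was cloned with git, that requirements.txt and ingest.py sit in the repository root (C:\martinezchatgpt\privateGPT-main\privateGPT-main), and that the vector store is the db folder mentioned above. If the source was downloaded as a zip instead, re-download the latest archive rather than running git pull.

cd C:\martinezchatgpt\privateGPT-main\privateGPT-main
rem Fetch the latest code (skip this and re-download the zip if git was not used)
git pull
rem Install the updated dependencies
pip install -r requirements.txt
rem Remove the old vector store so it is rebuilt with the new embeddings
rmdir /s /q db
rem Re-run the ingestion from the repository root
python ingest.py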
thank you...