Cannot load llama model
The model specified in the feature guide does not load and leads to the error below. What am I doing wrong?
For reference, the `langflow` command is run from the same folder that contains the `models` folder. I do execute the command outside of C:, so I don't know whether that could be the reason (the `models` folder is in the directory where the command is executed, not at C:).
The error is the following:

```
ValueError: Error building node LlamaCpp: Could not load Llama model from path: ./models/ggml-vicuna-13b-4bit.bin
INFO:     127.0.0.1:64396 - "POST /validate/node/dndnode_2 HTTP/1.1" 500 Internal Server Error
```
I encountered the same error.
Try using the complete (absolute) path. I used a llama.cpp model yesterday; I'll give it another try and report back.
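To follow up on the absolute-path suggestion: a minimal sketch of a helper (the helper name is my own, and the commented example path is the one from the thread) that turns a relative model path into the absolute path you can paste into the Model Path field, failing early if the file isn't there:

```python
from pathlib import Path

# Hypothetical helper: resolve a relative model path into the absolute
# path that a Model Path field expects, verifying the file exists.
def absolute_model_path(relative: str) -> str:
    path = Path(relative).expanduser().resolve()
    if not path.is_file():
        raise FileNotFoundError(f"Model not found at {path}")
    return str(path)

# Example with the path from the thread (uncomment and adjust):
# print(absolute_model_path("./models/ggml-vicuna-13b-4bit.bin"))
```

If this raises `FileNotFoundError`, the path you were giving LangFlow never pointed at the file in the first place.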
I encountered the same error a few days ago, and I think it was because of a model versioning issue.
I then downloaded this one instead: ggml-vic7b-q4_0.bin, and it works now.
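On the versioning point: llama.cpp has gone through several file-format revisions, and a loader built against one revision can refuse files from another. One quick sanity check is to read the file's 4-byte magic. The magic-to-format mapping below is an assumption on my part (based on common GGML-era formats plus GGUF); verify it against the llama.cpp version you actually run:

```python
# Sketch: guess a local model file's container format from its 4-byte
# magic. The mapping below is an assumption; check it against your
# llama.cpp build before relying on it.
KNOWN_MAGICS = {
    b"ggml": "legacy GGML (unversioned)",
    b"ggmf": "GGMF (versioned GGML)",
    b"ggjt": "GGJT (mmap-able GGML revision)",
    b"GGUF": "GGUF (current llama.cpp format)",
}

def model_format(path: str) -> str:
    with open(path, "rb") as f:
        magic = f.read(4)
    return KNOWN_MAGICS.get(magic, f"unknown magic {magic!r}")
```

If the magic is one your loader doesn't support, re-downloading a file in a newer format (as above) is the usual fix.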
Gonna try this later. If it works, you're a lifesaver!
I'm currently using LangFlow on my Windows laptop.
I don't know how to fill in the Model Path field: should I use a relative or an absolute path?
Any advice?
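A relative Model Path is resolved against the working directory of the `langflow` process, not against the folder where the model lives, which is why it can work in one terminal and fail in another. A small demonstration (the relative path below is a hypothetical example):

```python
import os
import tempfile
from pathlib import Path

# The same relative path resolves to different files depending on the
# current working directory of the process that interprets it.
rel = Path("./models/model.bin")  # hypothetical relative Model Path entry

dir_a = tempfile.mkdtemp()
dir_b = tempfile.mkdtemp()

os.chdir(dir_a)
resolved_from_a = rel.resolve()
os.chdir(dir_b)
resolved_from_b = rel.resolve()

# Two different absolute paths from the same relative string:
print(resolved_from_a)
print(resolved_from_b)
```

So the safest answer on any OS, Windows included, is to paste the full absolute path into the field.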