
Cannot load llama model

MatchTerm opened this issue 2 years ago • 5 comments

The model specified in the feature guide does not load and leads to the following error. What am I doing wrong?

For reference, the command to start langflow is run in the same folder that contains the models folder. I execute the langflow command outside of C:\, so I don't know if that might be the reason (the models folder is in the directory where the command is executed, not on C:\).

The error is the following:

ValueError: Error building node LlamaCpp: Could not load Llama model from path: ./models/ggml-vicuna-13b-4bit.bin
INFO: 127.0.0.1:64396 - "POST /validate/node/dndnode_2 HTTP/1.1" 500 Internal Server Error

MatchTerm · May 02 '23 08:05
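One way to check whether the path (rather than langflow itself) is the problem is to load the model outside langflow with the same LangChain LlamaCpp wrapper the node builds. A minimal sketch, assuming langchain and llama-cpp-python are installed and the script is run from the folder that contains ./models:

```python
# Minimal check outside langflow, using the same relative path as the failing node.
# Assumes `pip install langchain llama-cpp-python` and that this script is run
# from the folder that contains ./models.
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="./models/ggml-vicuna-13b-4bit.bin")
print(llm("Say hello."))
```

If this raises the same "Could not load Llama model" error, the problem is the path or the model file itself, not langflow.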

I encountered the same error.

roscoevanderboom · May 28 '23 15:05

Try using the complete path. I used a llamacpp model yesterday. I'll give it another try and report back.

ogabrielluiz · May 28 '23 18:05
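A small sketch of what "the complete path" means in practice: resolve the relative path to an absolute one and paste that into the Model Path field. The relative path below is just an example; adjust it to where your models folder actually is.

```python
# Resolve the relative path to an absolute one before pasting it into the
# Model Path field. The relative path here is only an example.
import os

model_path = os.path.abspath("./models/ggml-vicuna-13b-4bit.bin")
print(model_path)                   # the absolute path to paste into the field
print(os.path.isfile(model_path))   # True means the file really is where you think it is
```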


I encountered the same error as yours a few days ago, and I think it was because of a model versioning issue.

I then downloaded this one: ggml-vic7b-q4_0.bin and it works now.

ogabrielluiz · May 28 '23 19:05
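If it is a versioning issue, it usually means the ggml file was quantized in a format that the installed llama.cpp build no longer reads. Loading the file directly with llama-cpp-python is a quick way to confirm; a sketch, assuming the newer ggml-vic7b-q4_0.bin sits in the same ./models folder:

```python
# Direct load with llama-cpp-python. An outdated ggml quantization format
# typically fails here with a similar "could not load model" error, while a
# file that matches the installed llama.cpp version loads fine.
from llama_cpp import Llama

llm = Llama(model_path="./models/ggml-vic7b-q4_0.bin")
out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])
```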

Gonna try this later. If it works, you're a lifesaver!

MatchTerm · May 28 '23 21:05

I'm currently using langflow on my Windows laptop.

I don't know how to fill in the Model Path field: should I use a relative or an absolute path?

Any advice?

Namec999 · Jun 05 '23 13:06
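Either works as long as it points at the file, but on Windows an absolute path is the safer choice, since langflow may not be running from the folder you expect. A sketch for building and checking the value to paste into the Model Path field (the location below is hypothetical):

```python
# Build and verify an absolute Windows path for the Model Path field.
# The location below is just an example; point it at your own models folder.
from pathlib import Path

model_path = Path(r"C:\Users\you\langflow\models\ggml-vic7b-q4_0.bin")
print(model_path.exists())   # True means the field value should work
print(str(model_path))       # paste this string into the Model Path field
```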

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] · Jul 20 '23 15:07