OllamaEmbeddings hardcodes the model; it cannot be changed from llama2
Describe the bug
I used the OllamaEmbeddings component in my flow and changed the model to "mxbai-embed-large", but when I build the flow I get the following error:
ValueError: Error building node Qdrant(ID:Qdrant-hOQcv): Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"}
Looking at the source code of the OllamaEmbeddings node, I believe the model is hardcoded to llama2.
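The bug pattern described here is a component that never forwards the user-selected model to the underlying embeddings client, so the default (llama2) is always used. A minimal sketch of the broken and fixed patterns, with illustrative class names that are not Langflow's actual API:

```python
# Buggy pattern: the constructor accepts a model argument but then
# overwrites it with a hardcoded default, so the UI setting is ignored.
class OllamaEmbeddingsHardcoded:
    def __init__(self, base_url: str, model: str = "llama2"):
        self.base_url = base_url
        self.model = "llama2"  # bug: discards the user-supplied value

# Fixed pattern: forward whatever the user selected in the UI.
class OllamaEmbeddingsFixed:
    def __init__(self, base_url: str, model: str = "llama2"):
        self.base_url = base_url
        self.model = model  # honors the configured model

broken = OllamaEmbeddingsHardcoded("http://localhost:11434", model="mxbai-embed-large")
fixed = OllamaEmbeddingsFixed("http://localhost:11434", model="mxbai-embed-large")
print(broken.model)  # llama2 -- this is why the server reports a 404
print(fixed.model)   # mxbai-embed-large
```

With the hardcoded variant, any request goes out for llama2 regardless of the flow's settings, which matches the 404 "model 'llama2' not found" error above.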
Browser and Version
- Browser: Microsoft Edge
- Version: 123.0.2420.81 (Official build) (64-bit)

Attachment: First Example RAG.json
To Reproduce
Steps to reproduce the behavior:
- Run Ollama on your local machine
- Pull the model mxbai-embed-large (and, optionally, mistral)
- In the Langflow UI, create a new project
- Import the attached First Example RAG.json
- Change the host and port of your Ollama instance in the OllamaEmbeddings node
- Try to build the flow
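When reproducing, you can confirm which models the Ollama server actually has by querying its /api/tags endpoint (the route Ollama uses to list local models). A small helper, sketched with the name-matching logic separated out so it can be checked against a sample payload; the URL and helper names are illustrative:

```python
import json
from urllib.request import urlopen

def model_available(tags_response: dict, model: str) -> bool:
    """Check whether `model` appears in an /api/tags payload.
    Ollama reports names as 'name:tag', e.g. 'mxbai-embed-large:latest'."""
    names = {m["name"] for m in tags_response.get("models", [])}
    # Accept either an exact 'name:tag' match or a bare model name.
    return model in names or any(n.split(":")[0] == model for n in names)

def check_server(base_url: str, model: str) -> bool:
    """Query a running Ollama server (e.g. http://localhost:11434)."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return model_available(json.load(resp), model)

# The matching logic against a sample payload:
sample = {"models": [{"name": "mxbai-embed-large:latest"},
                     {"name": "mistral:latest"}]}
print(model_available(sample, "mxbai-embed-large"))  # True
print(model_available(sample, "llama2"))             # False -- the 404 case
```

If `check_server` reports the model as present but the flow still 404s on llama2, that points at the component ignoring the configured model rather than at the server.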
Screenshots
Additional context
I run Langflow through Docker with the latest image version, pulled 04/13/2024.
Hey, this is the revised draft for the issue. You can paste it into a custom component and test it yourself. (Your version of langflow must be v1 or higher.)
https://github.com/langflow-ai/langflow/pull/1703
Hello, sorry for the delay. Did you try the new version? Does the error still persist?
Hi @mrabbah
We hope you're doing well. Just a friendly reminder that if we do not hear back from you within the next 3 days, we will close this issue. If you need more time or further assistance, please let us know.
Thank you for your understanding!
Thank you for your contribution! This issue will be closed. If you have any questions or encounter another problem, please open a new issue and we will be ready to assist you.