[Feature Request]: Run Ollama on localhost, not in Docker.
Is there an existing issue for the same feature request?
- [x] I have checked the existing issues.
Is your feature request related to a problem?
Describe the feature you'd like
I love this project very much. Dify is a US AI platform; I believe RAGFlow will be the best AI platform from China. This is a convenience feature: many developers and users run Ollama directly on localhost, e.g. on a local MacBook or another laptop. The current RAGFlow documentation (https://github.com/infiniflow/ragflow/blob/main/docs/guides/deploy_local_llm.mdx) guides users to run Ollama in Docker. Could you consider supporting Ollama running directly on the localhost machine?
Describe implementation you've considered
No matter where Ollama runs, as long as it is reachable over the network, RAGFlow can use it.
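A minimal sketch to check that reachability before adding the model in the UI, run from wherever the RAGFlow backend runs (note that if RAGFlow runs in Docker, `localhost` inside the container is not the host machine). The base URL below is an assumption; substitute the address your Ollama instance actually listens on:

```python
# Quick reachability check for an Ollama server (sketch; adjust the URL).
# GET /api/tags is Ollama's endpoint for listing locally pulled models.
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"  # assumption: Ollama's default address

try:
    with urllib.request.urlopen(f"{OLLAMA_BASE_URL}/api/tags", timeout=5) as resp:
        models = json.load(resp).get("models", [])
        print("Ollama is reachable. Models:", [m["name"] for m in models])
except OSError as exc:
    # DNS failures, connection refused, and timeouts all end up here.
    print("Ollama is NOT reachable from here:", exc)
```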
> No matter where Ollama runs, as long as it is reachable over the network, RAGFlow can use it.
Good to know. When I set up the LLM via Ollama running on my local machine, with the configuration embedding / deepseek-r1:14b / http://host.docker.internal:11434, the following error pops up on https://demo.ragflow.io/user-setting/model :
**hint : 102 Fail to access embedding model(deepseek-r1:14b).[Errno -2] Name or service not known**
Could you kindly help?
This is not an embedding model: deepseek-r1:14b is a chat model, so it cannot be registered under embedding.
demo.ragflow.io cannot ping host.docker.internal:11434, so the hosted demo has no way to reach an Ollama instance running on your local machine.
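For reference, "[Errno -2] Name or service not known" is a DNS resolution failure: `host.docker.internal` only resolves from Docker containers on your own machine (Docker Desktop, or Linux with a host-gateway mapping), so a remote deployment cannot resolve it at all. A small diagnostic sketch, meant to be run on the machine or container where RAGFlow actually runs; the hostname is just the one from the error above:

```python
# Reproduce the "[Errno -2] Name or service not known" check (sketch).
# Run this where RAGFlow runs, not on a remote demo server.
import socket

HOSTNAME = "host.docker.internal"  # the hostname from the error above

try:
    addrs = socket.getaddrinfo(HOSTNAME, 11434)
    print("Resolves to:", sorted({a[4][0] for a in addrs}))
except socket.gaierror as exc:
    # gaierror -2 is exactly "Name or service not known".
    print("Cannot resolve", HOSTNAME, "->", exc)
```

If this fails inside a self-hosted RAGFlow container on Linux, a commonly used fix is adding an `extra_hosts` entry mapping `host.docker.internal` to `host-gateway` in the compose service.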
Got it, I will install RAGFlow in my local environment. Please close this issue. Thank you~