Error when adding Ollama
Self Checks
- [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
1.0.0-beta
Cloud or Self Hosted
Self Hosted (Source)
Steps to reproduce
When adding the Ollama configuration in the model provider interface, Ollama is set to the local address 192.168.xx.xx:11434, and the backend returns an error message.
✔️ Expected Behavior
Ollama is added successfully.
❌ Actual Behavior
The backend returns an error message.
I found a similar open issue and discussions related to the error when adding the Ollama model configuration:
- Error when adding ollama model - This discussion suggests upgrading Ollama to the latest version and using the new endpoint `api/embed`, as the old one `api/embeddings` has been deprecated. It also recommends ensuring the Ollama service is running and accessible, setting the `OLLAMA_HOST` environment variable to `0.0.0.0`, using the correct IP address for Docker, and adjusting firewall settings to allow incoming connections on port 11434.
- Can't add the Ollama Models - This discussion might provide additional insights or similar experiences from other users.
- Help: I got an error user ollama with model llava-phi3 - Another discussion that might be relevant to your issue.
For resolving the error message when adding the Ollama configuration with the local address 192.168.xx.xx:11434, ensure that the Ollama service is properly exposed and accessible from the network. You can follow these steps:
- Set the `OLLAMA_HOST` environment variable to `0.0.0.0` to bind it to all network interfaces.
- If using Docker, replace `localhost` with `host.docker.internal` in your service configuration.
- Ensure the network configuration allows access to the specified IP and port.
These steps should help resolve the connection error and allow you to successfully add the Ollama configuration [1][2].
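If it is unclear whether the server at 192.168.xx.xx:11434 is reachable at all from the machine (or container) running Dify, a quick probe outside Dify can narrow the problem down. Below is a minimal sketch of such a probe; the base URL and the embedding model name are placeholders for illustration, not values from this issue, so substitute your own:

```python
# Minimal connectivity check for an Ollama server (sketch; the base URL and
# model name below are placeholders -- replace them with your own values).
import json
import urllib.request

OLLAMA_BASE_URL = "http://192.168.1.10:11434"  # placeholder LAN address
EMBED_MODEL = "nomic-embed-text"               # placeholder embedding model

# 1) List the locally available models; a successful response proves the
#    service is reachable from this machine on port 11434.
with urllib.request.urlopen(f"{OLLAMA_BASE_URL}/api/tags", timeout=5) as resp:
    tags = json.load(resp)
    print("reachable, models:", [m["name"] for m in tags.get("models", [])])

# 2) Exercise the newer /api/embed endpoint (the old /api/embeddings is
#    deprecated in recent Ollama versions, as noted above). An HTTPError
#    here usually means the model has not been pulled on the Ollama side.
payload = json.dumps({"model": EMBED_MODEL, "input": "ping"}).encode()
req = urllib.request.Request(
    f"{OLLAMA_BASE_URL}/api/embed",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    result = json.load(resp)
    print("embedding length:", len(result["embeddings"][0]))
```

If the first request fails, the problem is network reachability or the `OLLAMA_HOST` binding; if only the second fails, the embedding model is probably missing on the Ollama side.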
Web console error message
Who can help me?
If Dify is deployed using Docker, consider using http://host.docker.internal:11434 to reach the Ollama service running on the Docker host.
ref: https://docs.dify.ai/development/models-integration/ollama
P.S. I ran into the same problem yesterday and solved it this way.
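To confirm whether this applies to a given setup, one can check from inside the Dify API container that the name resolves and the port answers. A minimal sketch, assuming Python is available in the container and Ollama listens on the default port 11434 (note that on Linux, host.docker.internal only resolves when the compose file maps it to the host gateway):

```python
# Quick check, meant to be run inside the Dify API container, that
# host.docker.internal resolves and that Ollama's port is reachable from
# there. Adjust the host name and port if your Ollama listens elsewhere.
import socket

HOST = "host.docker.internal"
PORT = 11434

addr = socket.gethostbyname(HOST)          # raises if the name does not resolve
print(f"{HOST} resolves to {addr}")

with socket.create_connection((addr, PORT), timeout=5):
    print(f"TCP connection to {addr}:{PORT} succeeded")
```

If the name does not resolve, using the host's LAN IP (e.g. 192.168.xx.xx) directly as the base URL is the usual fallback.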
After yesterday's update, the model could still be added. Now there is an error like the one in the screenshot, and adding it via the API also fails.
Did you fix it? I'm having the same issue.
My Ollama service and the Dify Docker instance are not on the same machine. I have verified that Ollama's port 11434 is accessible from the Docker server.
same error!
Latest update: I found that after restarting Docker Compose, it started working properly. Version 1.0.0 isn't very stable.