Ollama call failed: InternalServerError
Self Checks
- [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
v1.0.0-beta
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Adding a model from Ollama fails, and Xinference has the same problem. I can call Ollama successfully from other applications, which rules out a problem on the Ollama side.
ERROR [Dummy-18] [app.py:875] - Exception on /console/api/workspaces/current/model-providers/langgenius/ollama/ollama/models [POST]
✔️ Expected Behavior
The model deployed with Ollama should be called successfully.
❌ Actual Behavior
2025-02-16 05:06:26,456 ERROR [Dummy-18] [app.py:875] - Exception on /console/api/workspaces/current/model-providers/langgenius/ollama/ollama/models [POST]
Traceback (most recent call last):
  File "/app/api/.venv/lib/python3.12/site-packages/flask/app.py", line 917, in full_dispatch_request
    rv = self.dispatch_request()
  File "/app/api/.venv/lib/python3.12/site-packages/flask/app.py", line 902, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/app/api/.venv/lib/python3.12/site-packages/flask_restful/__init__.py", line 489, in wrapper
    resp = resource(*args, **kwargs)
  File "/app/api/.venv/lib/python3.12/site-packages/flask/views.py", line 110, in view
    return current_app.ensure_sync(self.dispatch_request)(**kwargs)  # type: ignore[no-any-return]
  File "/app/api/.venv/lib/python3.12/site-packages/flask_restful/__init__.py", line 604, in dispatch_request
    resp = meth(*args, **kwargs)
  File "/app/api/controllers/console/wraps.py", line 147, in decorated
    return view(*args, **kwargs)
  File "/app/api/libs/login.py", line 94, in decorated_view
    return current_app.ensure_sync(func)(*args, **kwargs)
  File "/app/api/controllers/console/wraps.py", line 27, in decorated
    return view(*args, **kwargs)
  File "/app/api/controllers/console/workspace/models.py", line 154, in post
    model_provider_service.save_model_credentials(
  File "/app/api/services/model_provider_service.py", line 237, in save_model_credentials
    provider_configuration.add_or_update_custom_model_credentials(
  File "/app/api/core/entities/provider_configuration.py", line 399, in add_or_update_custom_model_credentials
    provider_model_record, credentials = self.custom_model_credentials_validate(model_type, model, credentials)
  File "/app/api/core/entities/provider_configuration.py", line 379, in custom_model_credentials_validate
    credentials = model_provider_factory.model_credentials_validate(
  File "/app/api/core/model_runtime/model_providers/model_provider_factory.py", line 185, in model_credentials_validate
    self.plugin_model_manager.validate_model_credentials(
  File "/app/api/core/plugin/manager/model.py", line 137, in validate_model_credentials
    for resp in response:
  File "/app/api/core/plugin/manager/base.py", line 189, in _request_with_plugin_daemon_response_stream
    self._handle_plugin_daemon_error(error.error_type, error.message)
  File "/app/api/core/plugin/manager/base.py", line 223, in _handle_plugin_daemon_error
    raise PluginDaemonInternalServerError(description=message)
core.plugin.manager.exc.PluginDaemonInternalServerError: PluginDaemonInternalServerError: no available node, plugin not found
@GinHOW Hello there! I'm here to assist you with any bugs, questions, or contributions you may have. Let's work together to resolve the issue you're experiencing.
I found a similar issue related to the Ollama model in a self-hosted Dify setup. The problem often arises when the Ollama service is not accessible from the Docker container, leading to an Internal Server Error. This can happen if localhost is used, as it refers to the container itself rather than the host machine or other containers.
To resolve this issue, you should expose the Ollama service to the network by replacing localhost with host.docker.internal in the service URL:
http://host.docker.internal:11434
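To confirm reachability before changing anything, a quick check from inside the Dify api container can be sketched like this. The service name `api` and running from the `dify/docker` compose directory are assumptions based on a default Dify install:

```shell
# Hypothetical reachability check, run from the dify/docker directory.
# "api" is the service name in the default docker-compose.yaml (an assumption here).
OLLAMA_URL="http://host.docker.internal:11434"
docker compose exec api curl -sf "$OLLAMA_URL" \
  && echo "Ollama reachable from the api container" \
  || echo "Ollama NOT reachable from the api container"
```

If the check fails while `curl http://localhost:11434` works on the host, the problem is the container-to-host network path rather than Ollama itself.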
Additionally, ensure that the Ollama service is configured to be accessible from all IP addresses by setting the OLLAMA_HOST environment variable to 0.0.0.0. Depending on your operating system, you may need to set this environment variable differently:
- Mac: run `launchctl setenv OLLAMA_HOST "0.0.0.0"` and restart the Ollama application.
- Linux: edit the `ollama.service` file to include `Environment="OLLAMA_HOST=0.0.0.0"` under the `[Service]` section, then reload and restart the service.
- Windows: set the `OLLAMA_HOST` environment variable through the system environment variables settings, then restart Ollama.
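For the Linux case, the change is a small systemd unit fragment. A sketch of the override file, assuming the standard systemd install of Ollama (the override path shown is what `sudo systemctl edit ollama` would create):

```ini
# /etc/systemd/system/ollama.service.d/override.conf (path is an assumption;
# typically created via "sudo systemctl edit ollama")
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

After saving, `sudo systemctl daemon-reload && sudo systemctl restart ollama` applies it, and `ss -ltn | grep 11434` should show the listener bound to 0.0.0.0 instead of 127.0.0.1.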
These steps should help ensure that the Ollama service is properly exposed and accessible from your Docker setup [1].
It is not an issue with the ollama address. It should be a problem with docker-plugin_daemon-1.
The following is the internal error of docker-plugin_daemon-1:
2025/02/16 05:25:12 /app/internal/db/init.go:17
[error] failed to initialize database, got error failed to connect to host=db user=postgres database=postgres: dial error (dial tcp 172.19.0.5:5432: connect: connection refused)
2025/02/16 05:25:12 init.go:103: [PANIC] failed to init dify plugin db: failed to connect to host=db user=postgres database=postgres: dial error (dial tcp 172.19.0.5:5432: connect: connection refused)
2025/02/16 05:25:13 pool.go:32: [INFO] init routine pool, size: 10000
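When the plugin daemon panics with "connection refused" to `db`, a common first step is to check whether the Postgres container is healthy and finished starting before the daemon tried to connect. A sketch, assuming the default compose service names `db` and `plugin_daemon` from the stock Dify docker-compose.yaml:

```shell
DB_SERVICE="db"   # Postgres service name in the default compose file (assumption)
docker compose ps "$DB_SERVICE" || true                      # is the container up and healthy?
docker compose logs --tail 50 "$DB_SERVICE" || true          # look for crash loops or slow startup
docker compose exec "$DB_SERVICE" pg_isready -U postgres || true  # is Postgres accepting connections?
docker compose restart plugin_daemon || true                 # retry the daemon once the db is ready
```

If `pg_isready` reports "accepting connections" but the daemon still panics, compare the daemon's `DB_*` environment variables in docker-compose.yaml against the Postgres service settings.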
Hi there, I also encountered the same problem. May I ask if you have solved it, and if so, what is the solution?
The database name should be dify, right?
I hit this error too. Version 0.15 is fine; after moving to 1.0, Ollama connects to port 8080 inside Docker and throws an Internal error. It seems the web service inside Docker is the problem.
+1
+1
I solved this problem by modifying the docker-compose.yaml file: change all dify-api and dify-web versions to 0.15.3, and then I could add the Ollama models. This idea comes from https://blog.51cto.com/u_13563176/13390099.
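For reference, the downgrade described above amounts to pinning the image tags in docker-compose.yaml. A sketch of the relevant excerpt (the exact service keys depend on your compose file, so treat this as illustrative):

```yaml
# Excerpt only: pin the api and web images back to 0.15.3
services:
  api:
    image: langgenius/dify-api:0.15.3
  web:
    image: langgenius/dify-web:0.15.3
```

Be aware that a 1.0 database that has already run migrations may not be compatible with 0.15.3, so back up your volumes before downgrading.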
After changing to 0.15.3, Dify fails to initialize and the login page never appears.
> I hit this error too. Version 0.15 is fine; after moving to 1.0, Ollama connects to port 8080 inside Docker and throws an Internal error. It seems the web service inside Docker is the problem.

How did you solve it?
+1
Hello there. I have been facing this problem for days. At first I successfully installed llama3.2:latest, working with my local Ollama. But the next day, llama3.2 disappeared from the model list in my Dify settings, and this error occurred when I tried to add llama3.2 to my model list again.
I saw the article above about downgrading docker-api-1 and docker-web-1 to 0.15.3 and tried it out. Then I found I could no longer log in to Dify at http://localhost/install; the blue spinner revolves forever. Of course, I removed all cache and cookies from my browser.
In the background, I found a network error saying
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.27.4</center>
</body>
</html>
from http://localhost/console/api/setup.
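A 502 from nginx usually means the api container behind it is down or still starting, rather than a browser-side problem. A quick sketch for narrowing it down, assuming the default compose service names `api` and `nginx`:

```shell
API_SERVICE="api"   # assumption: default compose service name for the Dify API
docker compose ps || true                              # which containers are actually up?
docker compose logs --tail 50 "$API_SERVICE" || true   # why is the api not answering nginx?
docker compose logs --tail 20 nginx || true            # confirm nginx's upstream errors
```

If the api container is crash-looping on database migrations after the version change, its logs will show it here before any full reinstall is needed.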
Should I remove everything and reinstall from scratch? Does anybody have a solution?