/console/api/workspaces/current/models/model-types/llm cannot be requested
Self Checks
- [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
1.0.0
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
/console/api/workspaces/current/models/model-types/llm cannot be requested
After the system had been running for a while, the browser showed an exception when opening a specific application page. Following the failing request, I found that /console/api/workspaces/current/models/model-types/llm was not reaching the backend correctly. In the Nginx logs the request is recorded with a 499 status code, meaning the backend never responded, the client timed out locally and closed the connection, and no response ever reached the browser.
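For reference, one way to confirm from the host whether the endpoint responds at all (a minimal sketch; the host, port, and CONSOLE_TOKEN are placeholders for your own deployment, and the token can be copied from the Authorization header the console page sends in the browser):

```bash
# Hypothetical check from the machine hosting Dify; adjust host and token.
curl -sS -o /dev/null \
  -w "status=%{http_code} total_time=%{time_total}s\n" \
  -m 60 \
  -H "Authorization: Bearer $CONSOLE_TOKEN" \
  "http://localhost/console/api/workspaces/current/models/model-types/llm"
# If curl hangs and aborts at the 60 s limit, that matches the 499
# (client closed connection) entries seen in the Nginx logs.
```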
✔️ Expected Behavior
No response
❌ Actual Behavior
No response
However, some chat assistants can still hold normal conversations through their publicly accessible URLs.
The issue you're experiencing with the /console/api/workspaces/current/models/model-types/llm endpoint not reaching the backend and resulting in a 499 status code in the Nginx logs typically indicates that the client closed the connection before the server could respond. This can happen for several reasons, such as:

- Client Timeout: The client might be timing out before the server responds. This could be due to network latency or server processing delays.
- Nginx Configuration: Check your Nginx configuration for any timeout settings that might be causing the client to close the connection prematurely (a quick way to inspect them is sketched after this list).
- Backend Performance: Ensure that the backend service handling this request is performing optimally and not experiencing delays or errors that could lead to timeouts.
- LLM Provider Key Configuration: The message in the image suggests that the LLM provider's key is not set. Make sure that the API key for the LLM provider is correctly configured in Dify. This process is documented in the "Model Integration Settings" section of the en/llms.txt file in the Dify documentation [1].
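A minimal way to see which proxy timeout values are currently active, assuming the Nginx container name from Dify's default Docker Compose setup (docker-nginx-1 is an assumption; adjust it to your deployment):

```bash
# Assumption: the Nginx container is named docker-nginx-1.
# Print every proxy timeout directive currently shipped in the container.
docker exec docker-nginx-1 \
  grep -R -n "proxy_read_timeout\|proxy_send_timeout\|proxy_connect_timeout" /etc/nginx/
```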
Since some chat assistants are working normally through publicly accessible URLs, it might be worth comparing the configurations and network conditions between the working and non-working setups to identify any discrepancies.
docker restart docker-plugin_daemon-1
Running this command restores normal operation. But after a while the problem comes back and the service has to be restarted again.
The /console/api/workspaces/current/models/model-types/llm interface only retrieves some configuration information. Is it possible that the docker-plugin_daemon-1 service can no longer process it after running for a long time?
The docker-plugin_daemon-1 service is responsible for managing plugins, including handling plugin packages and facilitating remote plugin installation [1]. If restarting this service temporarily resolves the issue, it might indicate a problem with resource management or a memory leak within the service that accumulates over time, leading to degraded performance.
While the /console/api/workspaces/current/models/model-types/llm endpoint is meant to retrieve configuration information, there is no direct documentation indicating that the docker-plugin_daemon-1 service processes requests from this endpoint. However, if the service is indirectly affecting the overall system performance, it could potentially impact other components.
To address this, consider monitoring the resource usage of the docker-plugin_daemon-1 service over time to identify any patterns or spikes in usage. Additionally, reviewing the service's logs for any errors or warnings might provide insights into the underlying issue. If the problem persists, you may need to investigate further into the service's dependencies and configuration settings.
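For example, a rough monitoring loop along those lines (a sketch only; the container name matches the one restarted above, and the log file name is arbitrary):

```bash
# Log CPU and memory of the plugin daemon once a minute so that gradual
# growth (for example a leak) becomes visible over a few hours.
while true; do
  {
    date
    docker stats --no-stream \
      --format "{{.Name}} cpu={{.CPUPerc}} mem={{.MemUsage}}" \
      docker-plugin_daemon-1
  } >> plugin_daemon_stats.log
  sleep 60
done
```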
I have deployed this system on two servers. The one with a large number of users and heavy usage runs into this problem; the other stays normal when nobody is using it, so I suspect it is caused by heavy traffic. By "traffic" I mean the number of calls from the API service to the plugin service; could that be why the plugin service stops responding?
You can reinstall the missing model providers from the marketplace; that should resolve this problem.
@crazywoola eee... I don't have any missing models. The main issue is that when this happens, none of the models can be selected in the console backend. It affects all models, but it does not affect frontend LLM conversations or API calls.
The key point is that I already reinstalled all the models, because I also hit this problem after upgrading. I deleted all the relevant models under the plugin service, cleared the database tables, and downloaded and installed them again from the marketplace. I don't know where to troubleshoot, because it has now been running stably for 5 hours since the last failure, probably because usage is low at the moment.
Also, this interface only fetches data from the chat model providers, so it shouldn't fail to return results. The base models offered by each provider are bundled inside the plugin. Could it be that with many plugins installed the performance is insufficient, and the model parameters of all plugins cannot be pulled in one batch? My understanding is that if a plugin cannot reply in time, shouldn't the returned provider data simply omit that provider?
Did you manage to fix this, @rechardwang? Somehow I always get a 500 from console/api/workspaces/current/models/model-types/llm that says:
{"message": "Internal Server Error", "code": "unknown"}
@devalensio No, I haven't been able to solve this problem, because my plugin service doesn't log any error messages and I can't see where anything is going wrong. It's really frustrating.
This is a weird error, and I have no clue about it yet.
I am not sure if this is related. https://github.com/langgenius/dify-plugin-daemon/pull/59
@crazywoola I have now added a scheduled restart task for the plugin container, and it has already restarted it within a few hours, so things are barely working now, hahaha. I am quite sure it is not a problem with any single plugin, because our API is being called constantly and every major language model gets called. When the problem occurs it does not affect those calls, only the response of the frontend page. I will take a look after you publish an update, because I cannot change the code myself.
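For anyone wanting the same stopgap, a sketch of such a scheduled restart via cron (the container name matches the one above; the 6-hour interval is just an example):

```bash
# Append a root crontab entry that restarts the plugin daemon container
# every 6 hours. This is a workaround, not a fix.
( crontab -l 2>/dev/null; echo '0 */6 * * * docker restart docker-plugin_daemon-1' ) | crontab -
```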
You can update to the latest version, 1.0.1, to see if this persists.