Single LLM node runs OK in isolation but fails when connected to the start node
Self Checks
- [x] This is only for bug reports; if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, or they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
1.0.0
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
I upgraded from 0.15.3 to 1.0.0. A test run of the LLM node on its own succeeds, but as soon as I connect the start node to the LLM node, the run fails with the following error:

Run failed: PluginInvokeError: {"args":{},"error_type":"ConnectionError","message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fea36f25ca0>: Failed to establish a new connection: [Errno 111] Connection refused'))"}
✔️ Expected Behavior
No response
❌ Actual Behavior
No response
@LumingSun Hello! I'm here to help you with bug reports, questions, and becoming a contributor. I can assist you while waiting for a human maintainer.
A similar issue was discussed in the thread titled "Node Code run failed" on the Dify GitHub discussions, but it remains unsolved [1].
The error you're encountering, PluginInvokeError with a ConnectionError, suggests a problem with establishing an HTTPS connection to OpenAI's public blob storage. This could be due to network configuration issues, such as firewall settings or proxy configurations, especially since you're using a self-hosted Docker setup.
To troubleshoot this issue, consider the following steps:
1. **Network Access**: Ensure that your Docker environment has internet access and that no firewall rules block outbound connections to the required URLs.
2. **Proxy Settings**: If your environment requires a proxy to access the internet, make sure the proxy settings are correctly configured. Dify's latest versions handle proxy settings automatically, but you may need to verify that your setup aligns with the recommended configurations [2].
3. **Retry Settings**: Check the retry settings for HTTPS connections. Dify uses a default retry mechanism with a backoff strategy; you might want to adjust these settings if network instability is an issue [3].
4. **Error Handling**: Use Dify's error handling mechanisms to manage `PluginInvokeError` and related connection errors. This can help in identifying specific issues and implementing custom responses [4].
These steps should help in diagnosing and potentially resolving the connection error you're facing.
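A quick way to test step 1 is to probe the exact URL from the failing request inside the plugin daemon container (a sketch: run it via `docker exec` in your plugin daemon container, whose name varies per compose project, and it assumes `curl` is available in the image):

```shell
# Probe outbound HTTPS the same way the failing fetch does.
probe() {
  # -sS: quiet, but still print errors; --max-time: don't hang on a dead proxy
  curl -sS -o /dev/null -w 'HTTP %{http_code}\n' --max-time 10 "$1"
}

# "HTTP 200" means the host is reachable; a "Connection refused" error
# reproduces the failure seen in the workflow run.
probe https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe \
  || echo "probe failed: check firewall/proxy settings"
```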
To continue talking to Dosu, mention @dosu.
Help Dosu learn! Give it feedback: Great Response | Irrelevant Answer | Incorrect Sources | Too Verbose | Hallucination | Bug Report | Other
@dosu Why is it connecting to OpenAI's public blob storage? I don't use any OpenAI service.
The connection to OpenAI's public blob storage might be occurring due to default configurations or dependencies within Dify that attempt to access OpenAI resources. Specifically, the environment variable OPENAI_API_BASE is set to https://api.openai.com/v1 by default, which could lead to unintended connections if not explicitly configured otherwise [1].
Additionally, certain components or models within Dify might be pre-configured to interact with OpenAI services, even if you haven't explicitly set them up. It would be beneficial to review your configuration files and ensure that any references to OpenAI services are removed or replaced with the intended services you wish to use.
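Incidentally, the path /gpt-2/encodings/main/vocab.bpe in the traceback is the GPT-2 BPE vocabulary that tokenizer libraries such as tiktoken download on first use for token counting, which would explain the outbound connection even when no OpenAI model is configured (this is an assumption about Dify's internals, not something confirmed here). If that is the cause, the files can be fetched once on a machine with access and dropped into the tokenizer cache; a sketch assuming tiktoken's `TIKTOKEN_CACHE_DIR` convention, which keys cached files by the SHA-1 of the source URL:

```shell
# Pre-seed the tiktoken cache so no download is attempted at runtime.
# Assumption: tiktoken keys cache entries by sha1(url). Mount this
# directory into the container and set TIKTOKEN_CACHE_DIR there as well.
export TIKTOKEN_CACHE_DIR=/tmp/tiktoken-cache
mkdir -p "$TIKTOKEN_CACHE_DIR"

for url in \
  https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe \
  https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json
do
  key=$(printf '%s' "$url" | sha1sum | cut -d' ' -f1)
  curl -fsSL "$url" -o "$TIKTOKEN_CACHE_DIR/$key" \
    || echo "fetch failed (run this where the host is reachable): $url"
done
```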
@dosu Please offer me a domestic mirror address of OPENAI_API_BASE available in China.
The domestic mirror address for OPENAI_API_BASE that is available in China is https://api.lingyiwanwu.com [1].
@dosu I replaced OPENAI_API_BASE with the Zhipu API, but the error remains.
Ensure that all configurations and dependencies related to OpenAI are fully replaced or removed. Here are some steps to consider:
1. **Environment Variables**: Double-check that all environment variables related to OpenAI, such as `OPENAI_API_KEY`, `OPENAI_API_BASE`, and `OPENAI_API_ORGANIZATION`, are either removed or replaced with the corresponding Zhipu configurations.
2. **Configuration Files**: Review all configuration files to ensure there are no lingering references to OpenAI services, including any default settings that might still point to OpenAI.
3. **Dependencies**: Verify that no dependencies or packages specific to OpenAI are being used. This might involve checking `pyproject.toml` or similar files for OpenAI-specific packages.
4. **Model Provider Settings**: In Dify, navigate to Settings > Model Providers and ensure that Zhipu is correctly configured as the model provider, with the API key and other necessary details entered correctly.
5. **Error Logs**: Check the logs for any specific error messages that might indicate why the connection attempt to OpenAI is still being made. This can provide clues about what is still misconfigured.
By thoroughly checking these areas, you should be able to identify and resolve any remaining issues related to the connection error.
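For step 1, a quick scan of the compose env file can surface leftover settings (a sketch; the `docker/.env` path is an assumption, so point it at whichever file your stack actually loads):

```shell
# Report any lingering OpenAI variables in an env file, with line numbers.
scan_openai_vars() {
  grep -nEi 'OPENAI_API_(KEY|BASE|ORGANIZATION)' "$1" \
    || echo "no OpenAI variables found in $1"
}

scan_openai_vars docker/.env
```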
You can try this: #15011
@dosu I replaced the OPENAI_API_BASE with zhipu API, but the error remains.
@LumingSun Has it been resolved? It has been bothering me for several days now.
Me too: core.plugin.manager.exc.PluginInvokeError: PluginInvokeError: {"args":{},"error_type":"ConnectionError","message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fa83b9839e0>: Failed to establish a new connection: [Errno 113] No route to host'))"}
After configuring a proxy for plugin_daemon, it does work.
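For anyone hitting the same wall, the proxy workaround can be written as a compose override on the plugin daemon service (a sketch: the service name `plugin_daemon`, the proxy address, and the NO_PROXY host list are placeholders to adapt to your own stack):

```yaml
# docker-compose.override.yaml -- placeholder values, adjust to your setup
services:
  plugin_daemon:
    environment:
      HTTP_PROXY: "http://your-proxy:3128"
      HTTPS_PROXY: "http://your-proxy:3128"
      # keep in-cluster traffic (db, redis, api, ...) off the proxy
      NO_PROXY: "localhost,127.0.0.1,db,redis,api,plugin_daemon"
```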
However, a new problem appeared: when uploading documents to the knowledge base, chunk preview is very slow and the preview results cannot be displayed.
Hi, @LumingSun. I'm Dosu, and I'm helping the Dify team manage their backlog. I'm marking this issue as stale.
Issue Summary:
- The issue involves a "PluginInvokeError" due to a "ConnectionError" after upgrading from Dify 0.15.3 to 1.0.0.
- The error occurs when connecting a single LLM node to the start node in a self-hosted Docker setup.
- Suggested checks include network access, proxy settings, and configuration files for references to OpenAI services.
- Despite replacing `OPENAI_API_BASE` with a Zhipu API, the error persists.
- Other users, like @jesusLove and @PaddleOCRCore, report similar issues, with a noted workaround involving a proxy.
Next Steps:
- Please confirm if this issue is still relevant to the latest version of the Dify repository by commenting on this issue.
- If there is no further activity, this issue will be automatically closed in 15 days.
Thank you for your understanding and contribution!