After the client actively disconnects, the status of the workflow will not be updated
Self Checks
- [x] This is only for a bug report; if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please submit issues in English, or they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
1.4.1
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Question:
My test client request is like: /console/api/apps/ac5e06f4-bfe6-4344-b416-b2e54724af79/workflows/draft/run
This workflow may take two minutes to finish.
If I close the connection before the workflow finishes (as in the picture below, by clicking the 'close' button), the workflow status will not be updated.
In fact, the independent worker thread in the background has completed the workflow normally. The real issue lies in the generator logic that interacts with the client.
When the client actively closes the connection, the yield from inside stream_with_context(generate()) raises a GeneratorExit exception, which stops the iteration.
As a result, the yield self._base_task_pipeline._ping_stream_response() in the queue-processing function can no longer return properly, so the status information in the workflow_runs and workflow_node_executions tables is never updated even though the workflow itself ran to completion.
The frontend therefore keeps showing the workflow and node status as "running", even though they have actually completed successfully, as in the picture below.
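The failure mode described above can be reproduced in isolation. In this minimal sketch (the names are illustrative, not Dify's actual code), closing the outer generator propagates GeneratorExit through `yield from` into the inner generator, so any code after the inner generator's last consumed `yield` (here, the stand-in for the final status write) never runs:

```python
# Illustrative flag standing in for the DB status update.
status_updates = []

def worker_events():
    # Stands in for the queue-processing generator: it streams
    # events and writes the final status after the last yield.
    yield "node_started"
    yield "node_finished"
    status_updates.append("succeeded")  # never reached on early close

def generate():
    # Stands in for the generator passed to stream_with_context():
    # close() on the outer generator is forwarded through `yield from`
    # as a GeneratorExit inside worker_events().
    yield from worker_events()

g = generate()
print(next(g))         # client receives "node_started"...
g.close()              # ...then disconnects: GeneratorExit is thrown
print(status_updates)  # [] — the final status write never ran
```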
My attempt to fix the issue
I am not very familiar with Python syntax, but I made some attempts, and they worked in actual testing.
By using a for loop instead of yield from, and by handling the exception so the generator keeps driving the next iteration, the yield StreamResponse in the queue-processing function _process_stream_response can return properly. In testing, this ensured that workflow and node statuses are updated correctly even after the client actively closes the connection.
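The workaround can be sketched as follows. This is a hypothetical stand-in for Dify's generate() wrapper, not the actual patch: iterating with a plain for loop lets us catch GeneratorExit on disconnect and keep draining the inner generator, so its post-yield status update still executes:

```python
def resilient_generate(inner):
    # Iterate with a plain `for` loop instead of `yield from`, so a
    # client disconnect (GeneratorExit) can be caught here.
    it = iter(inner)
    try:
        for event in it:
            yield event
    except GeneratorExit:
        # The client closed the connection. We must not yield again,
        # but we may keep consuming the inner generator so its final
        # status-update code still runs.
        for _ in it:
            pass
        raise

# Demo with an illustrative inner generator.
status_updates = []

def worker_events():
    yield "node_started"
    yield "node_finished"
    status_updates.append("succeeded")

g = resilient_generate(worker_events())
print(next(g))         # "node_started"
g.close()              # disconnect: the drain loop still runs
print(status_updates)  # ["succeeded"]
```

Note that yielding inside a GeneratorExit handler would raise a RuntimeError; silently consuming the remaining events and re-raising is what makes this pattern safe.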
✔️ Expected Behavior
No response
❌ Actual Behavior
No response
You have correctly identified a known issue in Dify where, if the client disconnects during a long-running workflow (such as by closing the browser or terminating the request), the workflow status in the frontend remains "running" even though the backend process completes successfully. This happens because the generator handling the streaming response throws a GeneratorExit exception, which interrupts the normal status update logic and prevents the workflow and node statuses from being updated in the database.
Your analysis is accurate: the problem is rooted in how the generator is interrupted and how status updates are tied to the streaming context. Your workaround—using a for loop instead of yield from and explicitly handling exceptions to ensure the generator completes its iteration—aligns with best practices for ensuring cleanup and status updates even when the client disconnects.
There is also a related architectural recommendation: decouple the database update operations from the main thread that handles the streaming response. By moving these updates to a separate thread, you can ensure that workflow and node statuses are updated in the database even if the main thread is killed due to a client disconnect. This approach is especially important in streaming mode, where the issue is most likely to occur. For more details, see this discussion: Dify issue #12798.
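A minimal sketch of that decoupling, under the assumption that the worker thread owns all status writes (names are illustrative, not Dify's API): the streaming generator only forwards events from a queue, so a client disconnect kills the forwarder but not the worker that commits the status:

```python
import queue
import threading

results = {}

def run_workflow(events_out):
    # Background worker: runs the workflow and commits its final
    # status itself, independent of the streaming connection.
    for step in ("node_started", "node_finished"):
        events_out.put(step)
    results["status"] = "succeeded"  # a real DB commit would go here
    events_out.put(None)             # sentinel: no more events

def generate(events_out):
    # Streaming generator only forwards events to the client; a
    # disconnect terminates this generator, not the worker thread.
    while (event := events_out.get()) is not None:
        yield event

q = queue.Queue()
t = threading.Thread(target=run_workflow, args=(q,))
t.start()

g = generate(q)
print(next(g))  # client gets one event...
g.close()       # ...then disconnects

t.join()
print(results)  # {'status': 'succeeded'} despite the disconnect
```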
In summary:
- Your fix (using a for loop and handling exceptions) is a valid approach and has been shown to work in practice.
- For a more robust solution, consider decoupling status updates from the streaming response thread, as recommended in the linked issue.
- No similar issues or discussions were found in the repository that address this exact scenario, so your findings and workaround are valuable for others facing the same problem.
If you need further guidance on implementing the threading approach or want to discuss potential side effects of your fix, feel free to provide more details or ask specific questions.
Please share a DSL that can reproduce this issue.
Hoping someone picks this up: when the client disconnects, the response stream generator is terminated immediately, and any subsequent log messages that have not been committed are lost.
I applied the original poster's fix and am using it for now; looking forward to a better solution.
Hi, @Madiao. I'm Dosu, and I'm helping the Dify team manage their backlog and am marking this issue as stale.
Issue Summary:
- You reported that in version 1.4.1, workflow status remains "running" on the frontend if a client disconnects before completion due to a GeneratorExit exception.
- This happens because streaming response generators interrupt status updates when the client disconnects.
- A suggested workaround involves explicit exception handling and decoupling database updates from the streaming thread.
- Other users have noted similar observations and expressed interest in more robust architectural solutions.
- The issue remains unresolved and highlights the need for changes to ensure accurate status updates despite client disconnects.
Next Steps:
- Please let me know if this issue is still relevant with the latest version of Dify by commenting here to keep the discussion open.
- Otherwise, this issue will be automatically closed in 15 days.
Thank you for your understanding and contribution!