Streaming Response Broken: Cross-connected Question Classifier components.
Self Checks
- [x] This is only for bug reports; if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
0.15.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Problem Description
When I use two Question Classifier components and cross-connect their branches through variable aggregators, streaming output no longer works.
The same DSL file streams normally in version 0.10.2.
Test case
Input:
1. what's AI
Screenshot
DSL file
```yaml
app:
  description: ''
  icon: https://zswl-nav.oss-cn-hangzhou.aliyuncs.com/big-dipper/agent_app_icon.svg
  icon_background: ''
  mode: advanced-chat
  name: 分类测试
  use_icon_as_answer_icon: false
kind: app
version: 0.1.5
workflow:
  conversation_variables: []
  environment_variables: []
  features:
    file_upload:
      allowed_file_extensions:
      - .JPG
      - .JPEG
      - .PNG
      - .GIF
      - .WEBP
      - .SVG
      allowed_file_types:
      - image
      allowed_file_upload_methods:
      - local_file
      - remote_url
      enabled: false
      fileUploadConfig:
        audio_file_size_limit: 50
        batch_count_limit: 5
        file_size_limit: 15
        image_file_size_limit: 10
        video_file_size_limit: 100
        workflow_file_upload_limit: 10
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - local_file
        - remote_url
      number_limits: 3
    opening_statement: ''
    retriever_resource:
      enabled: true
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        isInIteration: false
        sourceType: start
        targetType: if-else
      id: 1741752284478-source-1741752315248-target
      source: '1741752284478'
      sourceHandle: source
      target: '1741752315248'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: variable-aggregator
        targetType: llm
      id: 1741752409392-source-1741752476950-target
      source: '1741752409392'
      sourceHandle: source
      target: '1741752476950'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: llm
        targetType: answer
      id: 1741752476950-source-1741752500566-target
      source: '1741752476950'
      sourceHandle: source
      target: '1741752500566'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: variable-aggregator
        targetType: llm
      id: 1741752422390-source-17417525115180-target
      source: '1741752422390'
      sourceHandle: source
      target: '17417525115180'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: llm
        targetType: answer
      id: 17417525115180-source-1741752521689-target
      source: '17417525115180'
      sourceHandle: source
      target: '1741752521689'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: if-else
        targetType: question-classifier
      id: 1741752315248-true-1741753169252-target
      source: '1741752315248'
      sourceHandle: 'true'
      target: '1741753169252'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: question-classifier
        targetType: variable-aggregator
      id: 1741753169252-1-1741752409392-target
      source: '1741753169252'
      sourceHandle: '1'
      target: '1741752409392'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: if-else
        targetType: question-classifier
      id: 1741752315248-false-1741753191614-target
      source: '1741752315248'
      sourceHandle: 'false'
      target: '1741753191614'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: question-classifier
        targetType: variable-aggregator
      id: 1741753169252-2-1741752422390-target
      source: '1741753169252'
      sourceHandle: '2'
      target: '1741752422390'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: question-classifier
        targetType: variable-aggregator
      id: 1741753191614-2-1741752422390-target
      source: '1741753191614'
      sourceHandle: '2'
      target: '1741752422390'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: question-classifier
        targetType: variable-aggregator
      id: 1741753191614-1-1741752409392-target
      source: '1741753191614'
      sourceHandle: '1'
      target: '1741752409392'
      targetHandle: target
      type: custom
      zIndex: 0
    nodes:
    - data:
        desc: ''
        selected: false
        title: 开始
        type: start
        variables: []
      height: 53
      id: '1741752284478'
      position:
        x: -129.38320907155503
        y: 234.43171572743677
      positionAbsolute:
        x: -129.38320907155503
        y: 234.43171572743677
      selected: true
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        cases:
        - case_id: 'true'
          conditions:
          - comparison_operator: contains
            id: fbc8fda1-1261-419c-a35e-f683f0852888
            value: '1'
            varType: string
            variable_selector:
            - sys
            - query
          id: 'true'
          logical_operator: and
        desc: ''
        selected: false
        title: 条件分支
        type: if-else
      height: 125
      id: '1741752315248'
      position:
        x: 249.47152014309233
        y: 234.43171572743677
      positionAbsolute:
        x: 249.47152014309233
        y: 234.43171572743677
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        desc: ''
        output_type: string
        selected: false
        title: 变量聚合器
        type: variable-aggregator
        variables:
        - - sys
          - query
      height: 108
      id: '1741752409392'
      position:
        x: 1119.3936288724205
        y: 50.819810969560464
      positionAbsolute:
        x: 1119.3936288724205
        y: 50.819810969560464
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        desc: ''
        output_type: string
        selected: false
        title: 变量聚合器 2
        type: variable-aggregator
        variables:
        - - sys
          - query
      height: 108
      id: '1741752422390'
      position:
        x: 1084.9316558973724
        y: 614.2042078714989
      positionAbsolute:
        x: 1084.9316558973724
        y: 614.2042078714989
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        model:
          completion_params:
            temperature: 0.7
          mode: chat
          name: qwen2.5-7b-instruct
          provider: tongyi
        prompt_template:
        - id: 441088fc-999d-4fb0-8fb8-7434b4b4fe10
          role: system
          text: 你是一个智能助手
        - id: 0f7716c4-be2e-4064-b186-1ed4c580e98d
          role: user
          text: '{{#sys.query#}}'
        selected: false
        title: LLM 2
        type: llm
        variables: []
        vision:
          enabled: false
      height: 97
      id: '1741752476950'
      position:
        x: 1479.498326710238
        y: 50.819810969560464
      positionAbsolute:
        x: 1479.498326710238
        y: 50.819810969560464
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        answer: '{{#1741752476950.text#}}'
        desc: ''
        selected: false
        title: 直接回复
        type: answer
        variables: []
      height: 102
      id: '1741752500566'
      position:
        x: 1814.5431463289553
        y: 50.819810969560464
      positionAbsolute:
        x: 1814.5431463289553
        y: 50.819810969560464
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        model:
          completion_params:
            temperature: 0.7
          mode: chat
          name: qwen2.5-7b-instruct
          provider: tongyi
        prompt_template:
        - id: 441088fc-999d-4fb0-8fb8-7434b4b4fe10
          role: system
          text: 你是一个智能助手
        - id: 0f7716c4-be2e-4064-b186-1ed4c580e98d
          role: user
          text: '{{#sys.query#}}'
        selected: false
        title: LLM 2 (1)
        type: llm
        variables: []
        vision:
          enabled: false
      height: 97
      id: '17417525115180'
      position:
        x: 1399.1401097354772
        y: 607.336782222121
      positionAbsolute:
        x: 1399.1401097354772
        y: 607.336782222121
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        answer: '{{#17417525115180.text#}}'
        desc: ''
        selected: false
        title: 直接回复 2
        type: answer
        variables: []
      height: 102
      id: '1741752521689'
      position:
        x: 1749.2303080169243
        y: 607.336782222121
      positionAbsolute:
        x: 1749.2303080169243
        y: 607.336782222121
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        classes:
        - id: '1'
          name: AI
        - id: '2'
          name: NBA
        desc: ''
        instructions: ''
        model:
          completion_params:
            temperature: 0.7
          mode: chat
          name: qwen2.5-7b-instruct
          provider: tongyi
        query_variable_selector:
        - '1741752284478'
        - sys.query
        selected: false
        title: 问题分类器
        topics: []
        type: question-classifier
        vision:
          enabled: false
      height: 175
      id: '1741753169252'
      position:
        x: 689.7271713683517
        y: -74.11560036105249
      positionAbsolute:
        x: 689.7271713683517
        y: -74.11560036105249
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        classes:
        - id: '1'
          name: AI
        - id: '2'
          name: NBA
        desc: ''
        instructions: ''
        model:
          completion_params:
            temperature: 0.7
          mode: chat
          name: qwen2.5-7b-instruct
          provider: tongyi
        query_variable_selector:
        - '1741752284478'
        - sys.query
        selected: false
        title: 问题分类器 2
        topics: []
        type: question-classifier
        vision:
          enabled: false
      height: 175
      id: '1741753191614'
      position:
        x: 580.3201175414563
        y: 490.70049614910124
      positionAbsolute:
        x: 580.3201175414563
        y: 490.70049614910124
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    viewport:
      x: -179.47546784855808
      y: 546.9575769227345
      zoom: 0.5946035697843213
```
✔️ Expected Behavior
Streaming Response
❌ Actual Behavior
Streaming Response Broken
I tried applying the fix from #14065, but it still didn't work.
@zhudongwork Hello there! I'm here to help you with any bugs, questions, or contributions while waiting for a human maintainer.
I found a similar unsolved discussion regarding non-streaming output from LLM models in Dify, which might be related to your issue [1].
In Dify version 0.15.3, streaming output issues can occur when using multiple conditional branches without proper aggregation. To address this, ensure that a variable aggregator is used after the conditional branches. This is crucial for maintaining continuous streaming output [2].
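As a sketch of the pattern Dosu describes, the two classifier branches should converge on a single Variable Aggregator node before the downstream LLM, so only one stream reaches the Answer node. The node ids and titles below are illustrative placeholders, not taken from the DSL above:

```yaml
# Hypothetical DSL fragment: both classifier branches feed one aggregator,
# which then feeds a single LLM node downstream.
edges:
- source: 'classifier-a'   # illustrative id
  sourceHandle: '1'
  target: 'aggregator-1'
- source: 'classifier-b'   # illustrative id
  sourceHandle: '1'
  target: 'aggregator-1'
- source: 'aggregator-1'
  target: 'llm-1'
nodes:
- data:
    type: variable-aggregator
    output_type: string
    variables:
    - - sys
      - query
  id: 'aggregator-1'
```

Note that the DSL attached in this report already follows this shape, which is why the advice alone may not resolve the regression.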
Additionally, make sure that the exception handling setting is disabled in your workflow configuration, as it can interfere with streaming output functionality. Also, configure specific environment variables in your .env file to prevent stream response interruptions by setting appropriate timeout values [3][4].
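As a concrete example of the timeout settings mentioned above, the fragment below uses variable names that follow Dify's docker `.env.example`; verify that they exist in your deployed version before relying on them:

```shell
# Hedged example .env fragment (names per Dify's docker .env.example;
# confirm against your version). Raise the reverse-proxy timeouts so
# long-lived SSE streams are not cut off mid-response:
NGINX_PROXY_READ_TIMEOUT=3600s
NGINX_PROXY_SEND_TIMEOUT=3600s
# Allow long-running workflow executions (seconds):
APP_MAX_EXECUTION_TIME=1200
```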
If these steps do not resolve the issue, further investigation into the specific configuration of your workflow might be necessary.
Same problem, please fix it!
Same problem here.
My version is 1.1.2.
The problem is the If/Else node: when I delete the If/Else node, streaming works.
Hi, @zhudongwork. I'm Dosu, and I'm helping the Dify team manage their backlog. I'm marking this issue as stale.
Issue Summary:
- A bug in Dify version 0.15.3 causes streaming output to fail with two Question Classification components in a Docker setup.
- The issue is a regression from version 0.10.2, where it did not exist.
- Update #14065 was attempted but did not resolve the issue.
- Other users have reported similar problems, with a workaround involving removing conditional branching.
Next Steps:
- Please confirm if this issue is still relevant to the latest version of the Dify repository by commenting here.
- If no updates are provided, the issue will be automatically closed in 15 days.
Thank you for your understanding and contribution!