Not all of an Answer node's content needs to be added to the memory of LLM nodes (for example, echarts data); a switch should be added to control this behavior.
Self Checks
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thanks! :)
- [x] Please do not modify this template :) and fill in all the required fields.
1. Is this request related to a challenge you're experiencing? Tell me about your story.
Not all of an Answer node's content needs to be added to the memory of LLM nodes (for example, echarts data); a switch should be added to control this behavior.
2. Additional context or comments
No response
3. Can you help us with this feature?
- [ ] I am interested in contributing to this feature.
I think you can do this by turning off the memory function and using conversation variables to manage the context yourself. That said, it is rather complicated to control exactly what you want to keep from the answer.
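To make the workaround concrete, here is a minimal, hypothetical Python sketch (illustrative names, not Dify code) of the filtering you would do yourself: strip fenced echarts blocks from an answer before appending it to the conversation variable that serves as your manually managed memory. The function name and the `[chart omitted]` placeholder are assumptions for illustration.

````python
import re

# Hypothetical helper (illustrative, not part of Dify): strip fenced
# ```echarts``` blocks from an answer before appending it to a conversation
# variable that serves as the manually managed "memory".
ECHARTS_BLOCK = re.compile(r"```echarts.*?```", re.DOTALL)

def strip_echarts(answer: str) -> str:
    """Return the answer with echarts code fences replaced by a placeholder."""
    return ECHARTS_BLOCK.sub("[chart omitted]", answer).strip()

if __name__ == "__main__":
    answer = (
        "Here is the sales trend:\n"
        "```echarts\n"
        '{"series": [{"type": "line", "data": [1, 2, 3]}]}\n'
        "```\n"
        "Sales grew steadily over the quarter."
    )
    print(strip_echarts(answer))
    # Here is the sales trend:
    # [chart omitted]
    # Sales grew steadily over the quarter.
````

In a chatflow, logic like this could live in a Code node whose output is written into a conversation variable, which the LLM node's prompt then references instead of the built-in memory.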
I have the same problem. In workflow-based conversations (chatflow), the workflow contains multiple LLM nodes, such as intent recognition, question rewriting, and SQL generation. After enabling the memory function, I found that the assistant turns in each LLM node's conversation history are the final results of the workflow execution rather than that node's own previous outputs. This causes LLM nodes to generate incorrect responses based on the wrong history. I hope this can be improved; a minimal sketch of the behaviour follows.
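For clarity, here is a minimal sketch of the reported behaviour (an illustration under my assumptions, not Dify's actual implementation): memory pairs each user query with the final workflow answer, so every LLM node replays that same history, while what a node such as SQL generation actually needs is its own previous output.

```python
from dataclasses import dataclass, field

# Minimal sketch of the reported behaviour (illustration, not Dify's
# actual implementation). Memory pairs each user query with the *final*
# workflow answer, so every LLM node sees the same history, while a node
# such as "sql" really needs its own previous output.
@dataclass
class Turn:
    query: str
    node_outputs: dict[str, str]  # e.g. {"intent": "...", "sql": "..."}
    final_answer: str

@dataclass
class Conversation:
    turns: list[Turn] = field(default_factory=list)

    def memory_as_reported(self, node_id: str) -> list[tuple[str, str]]:
        # Reported behaviour: history uses the final answer for every node.
        return [(t.query, t.final_answer) for t in self.turns]

    def memory_as_desired(self, node_id: str) -> list[tuple[str, str]]:
        # Desired behaviour: history uses the node's own previous output.
        return [(t.query, t.node_outputs.get(node_id, "")) for t in self.turns]
```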
https://github.com/langgenius/dify/discussions/20413
Hi, @yg2024. I'm Dosu, and I'm helping the Dify team manage their backlog and am marking this issue as stale.
Issue Summary:
- You requested a feature: a switch to control whether Answer node content (e.g., echarts data) is stored in LLM node memory.
- A contributor suggested disabling memory and using conversation variables as a workaround, though it is complex.
- Another user pointed out issues with memory in workflow-based conversations where LLM nodes receive final workflow results instead of node-specific history, causing errors.
- A related discussion link was shared for additional context.
- The issue remains unresolved with no recent updates.
Next Steps:
- Please let me know if this issue is still relevant to the latest version of Dify by commenting here to keep the discussion open.
- Otherwise, I will automatically close this issue in 15 days.
Thank you for your understanding and contribution!