Extracting parameters when memory is enabled does not work
Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thanks! :)
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.12
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
- create any chatflow
- add a parameter extractor block
- turn on memory in the parameter extractor (a sketch of the relevant node config follows this list)
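For reference, a minimal sketch of the relevant Parameter Extractor node data as it appears in the exported DSL (field names taken from the demo yml posted later in this thread; in the DSL, memory is turned on simply by the presence of the `memory` block):

```yaml
# Sketch of the Parameter Extractor node data; not a complete DSL file.
memory:                  # the presence of this block means memory is enabled
  query_prompt_template: ''
  role_prefix:
    assistant: ''
    user: ''
  window:
    enabled: false
    size: 50
reasoning_mode: prompt   # the mode that fails when memory is on (see discussion below)
```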
✔️ Expected Behavior
Parameters are extracted successfully (success: 1), the same as when memory is disabled.
❌ Actual Behavior
success: 0, with a different (seemingly random) failure reason every time.
To resolve the issue of receiving a "success: 0" message and a random reason when extracting parameters with memory enabled in Dify version 0.6.12 on a self-hosted Docker setup, follow these steps:

1. **Check Docker Containers**: Ensure all required Docker containers are running successfully:

   ```bash
   docker compose ps
   ```

2. **Update Environment Variables**: Sync your `.env` file with the latest configuration from `.env.example`:

   ```bash
   cd dify/docker
   cp .env.example .env
   ```

3. **Restart Dify**: Apply the changes by restarting the Dify services:

   ```bash
   docker compose down
   docker compose up -d
   ```

4. **Check Logs**: If the issue persists, check the logs of the relevant containers to identify any specific errors. For example, to check the logs of the `api` container:

   ```bash
   docker compose logs api
   ```

5. **Cross-Domain Issues**: Ensure CORS configurations are correctly set. Update the following configuration items in `docker-compose.yml` to the new domain: `CONSOLE_API_URL`, `CONSOLE_WEB_URL`, `SERVICE_API_URL`, `APP_API_URL`, `APP_WEB_URL` (see the sketch after this list).

6. **Port Configuration**: Ensure ports are correctly configured and not conflicting with other services. If port 80 is occupied, stop the service occupying it or modify the port mapping in `docker-compose.yaml` to map port 80 to another port.
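For item 5, here is a minimal sketch of what those entries could look like. Everything below is illustrative: `dify.example.com` is a placeholder domain, not a value from this issue, and depending on your Dify version these values may live in `.env` rather than directly in `docker-compose.yml`.

```yaml
# Sketch only: dify.example.com is a placeholder domain.
services:
  api:
    environment:
      CONSOLE_API_URL: https://dify.example.com
      CONSOLE_WEB_URL: https://dify.example.com
      SERVICE_API_URL: https://dify.example.com
      APP_API_URL: https://dify.example.com
      APP_WEB_URL: https://dify.example.com
```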
By following these steps, you should be able to resolve the issue with parameter extraction in Dify [1][2].
To continue talking to Dosu, mention @dosu.
Can you give us more details regarding this issue? An example would be great, for instance, an example input and the expected output.
@crazywoola Here is the yml of a demo chatflow. In this chatflow the parameter extractor's memory is on and it does not work (success: 0); if you turn memory off, it works (success: 1). I also tested it with different LLMs, like qwen 7b and openchat 7b.
```yaml
app:
  description: ''
  icon: "\U0001F916"
  icon_background: '#FFEAD5'
  mode: advanced-chat
  name: issue demo
workflow:
  features:
    file_upload:
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - local_file
        - remote_url
    opening_statement: ''
    retriever_resource:
      enabled: true
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        sourceType: llm
        targetType: answer
      id: llm-answer
      source: llm
      sourceHandle: source
      target: answer
      targetHandle: target
      type: custom
    - data:
        isInIteration: false
        sourceType: start
        targetType: parameter-extractor
      id: 1723558713101-source-1723558759503-target
      source: '1723558713101'
      sourceHandle: source
      target: '1723558759503'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: parameter-extractor
        targetType: llm
      id: 1723558759503-source-llm-target
      source: '1723558759503'
      sourceHandle: source
      target: llm
      targetHandle: target
      type: custom
      zIndex: 0
    nodes:
    - data:
        desc: ''
        selected: false
        title: Start
        type: start
        variables: []
      height: 54
      id: '1723558713101'
      position:
        x: 80
        y: 282
      positionAbsolute:
        x: 80
        y: 282
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        memory:
          role_prefix:
            assistant: ''
            user: ''
          window:
            enabled: false
            size: 10
        model:
          completion_params:
            temperature: 0.3
          mode: chat
          name: MultiCreator Lite
          provider: corpgpt
        prompt_template:
        - id: 131ec626-9b8c-42a5-afd6-6f1714652286
          role: system
          text: you should give brief answers.
        selected: false
        title: LLM
        type: llm
        variables: []
        vision:
          enabled: false
      height: 98
      id: llm
      position:
        x: 680
        y: 282
      positionAbsolute:
        x: 680
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        answer: '{{#llm.text#}}'
        desc: ''
        selected: false
        title: Answer
        type: answer
        variables: []
      height: 107
      id: answer
      position:
        x: 980
        y: 282
      positionAbsolute:
        x: 980
        y: 282
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        desc: ''
        instruction: ''
        memory:
          query_prompt_template: ''
          role_prefix:
            assistant: ''
            user: ''
          window:
            enabled: false
            size: 50
        model:
          completion_params:
            temperature: 0
          mode: chat
          name: MultiCreator Lite
          provider: corpgpt
        parameters:
        - description: sentiment of last message
          name: message_sentiment
          required: true
          type: string
        - description: give sentiment of last 5 messages
          name: dialogue_sentiment
          required: true
          type: string
        query:
        - sys
        - query
        reasoning_mode: prompt
        selected: false
        title: Parameter Extractor
        type: parameter-extractor
        variables: []
      height: 98
      id: '1723558759503'
      position:
        x: 380
        y: 282
      positionAbsolute:
        x: 380
        y: 282
      selected: true
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    viewport:
      x: -729
      y: 34.5
      zoom: 1
```
Hello, could you zip the yml and upload it as an attachment as well? An attached file is easier to import and test.
@crazywoola sorry :) issue demo.yml.zip
Hello, I tried your yml file, but I am not sure what to input. I got this error:
```
Failed to extract result from function call or text response, using empty result.
```
I have revisited the issue. From the error above, it seems qwen 7b and openchat 7b do not support function calling; maybe it's related to that setting.
Sorry for the late reply, @crazywoola.
I've tested it on gpt-3.5 and the result is the same, so I don't really think the problem is in the open-source 7b models.
It works when I change the reasoning mode to function call, but it's still strange that it doesn't work for any model in prompt reasoning mode when memory is turned on.
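For anyone else hitting this, the workaround above is a one-field change on the Parameter Extractor node in the exported DSL. A sketch (the `function_call` value is what Dify appears to write for function-call mode; verify it against your own export):

```yaml
# Parameter Extractor node data in the exported DSL:
# reasoning_mode: prompt        # broken when the memory block is present
reasoning_mode: function_call   # workaround: works with memory enabled
```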
Hi, may I ask if this issue has been resolved? I have run into the same problem: parameter extraction fails when memory is enabled.
I have the same problem. It's not fixed.