Workflow LLM node stop sequence set to '\n' is escaped to '\\n'.
Self Checks
- [X] This is only for a bug report; if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.6
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
In a workflow, setting the stop sequence to '\n' in the LLM node gets escaped: the LLM API receives it as '\\n'.
OpenAI-API-compatible Model Provider:
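To make the difference concrete, the snippet below (plain Python, independent of Dify) shows how a real newline and the escaped two-character sequence behave as stop markers:

```python
# A real newline vs. the escaped two-character form a broken config produces.
real_newline = "\n"    # one character: LF
escaped_form = "\\n"   # two characters: backslash + 'n'

print(len(real_newline))  # 1
print(len(escaped_form))  # 2

# A stop sequence only triggers if its exact characters appear in the output.
completion = "first line\nsecond line"
print(real_newline in completion)   # True  -> generation would stop here
print(escaped_form in completion)   # False -> stop never triggers
```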
✔️ Expected Behavior
The stop sequence configured on the frontend should match the stop configuration that is finally passed to the model API.
❌ Actual Behavior
The configured stop sequence is escaped before it reaches the model.
To address the issue where the stop sequence `\n` is being escaped to `\\n` in your Dify 0.6.6 workflow, ensure that the stop sequence is correctly specified in the model configuration parameters. The stop sequence should be included as a list of strings in the `completion_params` of your model configuration, like so:
model_config = {
    "name": "TheBloke/codecept-coder-6.7B-instruct-AI21",
    "completion_params": {
        "stop": ["\n"],  # directly specifying the newline character
        # other parameters...
    },
    # other configuration...
}
Ensure that the `stop` parameter is not modified or incorrectly escaped anywhere between configuration and invocation. If the issue persists, review the code path for any transformations or re-escaping that might occur, especially in utility functions or methods that manipulate string parameters.
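If such a transformation is found, one possible workaround (a minimal sketch only; `normalize_stop_sequences` is a hypothetical helper, not part of the Dify codebase) is to unescape the stop list right before the provider request is built:

```python
import codecs

def normalize_stop_sequences(stop: list[str]) -> list[str]:
    """Turn literal escape sequences such as '\\n' (backslash + 'n')
    back into the control characters they represent."""
    return [codecs.decode(s, "unicode_escape") for s in stop]

# What arrives from the frontend vs. what the model API should receive.
raw_stop = ["\\n"]  # two characters, as stored by the escaped config
print(normalize_stop_sequences(raw_stop) == ["\n"])  # True
```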
The Dify codebase handles the escaping of characters in the stop sequence configuration for the LLM node in methods like `_transform_chat_json_prompts` and `_transform_completion_json_prompts` within the `OpenAILargeLanguageModel` class. These methods append specific escape sequences to the stop sequences for JSON or XML response formats, ensuring correct interpretation and handling.
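For context, the pattern those methods implement is roughly the one sketched below. This is a simplified illustration only, not Dify's actual code; the exact stop strings appended for JSON/XML handling may differ:

```python
from typing import Optional

def transform_json_prompts(stop: Optional[list[str]]) -> list[str]:
    # Illustration: when a structured (e.g. JSON) response format is requested,
    # an extra stop sequence is appended so generation halts at the closing fence.
    stop = list(stop or [])
    if "\n```" not in stop:
        stop.append("\n```")
    return stop

print(transform_json_prompts(["\n"]))  # ['\n', '\n```']
```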
If you're still facing issues, providing more details about your setup, configuration, and the specific part of your workflow where this issue occurs could help in identifying a more precise solution.
Sources
- api/core/model_runtime/model_providers/openai/llm/llm.py
- api/core/workflow/nodes/llm/llm_node.py
- zh_CN/guides/workflow/node/llm.md
- en/features/workflow/node/llm.md
- api/core/model_runtime/model_providers/anthropic/llm/llm.py
About Dosu: This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Has anyone been working on solving this issue? I have encountered the same problem.
Can you try the latest version?
In version 0.6.12-fix1, this issue still exists. I will try the latest version later.
> Can you try the latest version?

Same issue in 0.6.13.
This may also be related to #6208.