
Workflow LLM node stop sequence '\n' is escaped to '\\n'.

sagarey opened this issue 9 months ago · 6 comments

Self Checks

  • [X] This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • [X] I have searched for existing issues, including closed ones.
  • [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [X] Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.6

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

In the workflow, setting the stop sequence to '\n' in the LLM node causes it to be escaped; the LLM API receives it as '\\n'. [Screenshot 2024-05-09 010741]

OpenAI-API-compatible Model Provider: [screenshot]

✔️ Expected Behavior

The stop sequence configured on the frontend matches the stop config in the final API call. [screenshot]

❌ Actual Behavior

The configured value is escaped. [Screenshot 2024-05-09 011833]
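
In other words, the model receives the two-character sequence backslash + 'n' rather than a single newline character. A quick Python illustration of the difference (plain string semantics, not Dify code):

configured = "\n"   # what was intended: a single newline character (length 1)
received = "\\n"    # what the API receives: backslash followed by 'n' (length 2)

print(len(configured), len(received))  # 1 2
print(configured == received)          # False -- the model never sees a real newline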

sagarey · May 08 '24 17:05

To address the issue where the stop sequence \n is being escaped to \\n in your Dify 0.6.6 workflow, ensure that the stop sequence is correctly specified in the model configuration parameters. The stop sequence should be included as a list of strings in the completion_params of your model configuration, like so:

model_config = {
    "name": "TheBloke/codecept-coder-6.7B-instruct-AI21",
    "completion_params": {
        "stop": ["\n"],  // Directly specifying the newline character
        # other parameters...
    }
    # other configuration...
}

Ensure that the stop parameter is not modified or incorrectly escaped from the point of configuration to invocation. If the issue persists, review the code path for any transformations or re-escapings that might occur, especially in utility functions or methods that manipulate string parameters.
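
As a stopgap while the escaping persists, the stop list can be normalized just before invocation. The helper below is a hypothetical sketch for illustration, not part of Dify's API; the name unescape_stops and its replacement table are assumptions:

def unescape_stops(stops):
    """Convert doubly escaped sequences (e.g. '\\n') back to their literal
    characters before the stop list is sent to the LLM API.
    Hypothetical helper, not part of the Dify codebase."""
    replacements = {"\\n": "\n", "\\t": "\t", "\\r": "\r"}
    return [replacements.get(s, s) for s in stops]

model_config["completion_params"]["stop"] = unescape_stops(
    model_config["completion_params"]["stop"]
)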

The Dify codebase handles the escaping of characters in the stop sequence configuration for the LLM node in methods like _transform_chat_json_prompts and _transform_completion_json_prompts within the OpenAILargeLanguageModel class. These methods append specific escape sequences to the stop sequences for JSON or XML response formats, ensuring correct interpretation and handling.
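
For reference, the pattern those methods follow is roughly the one sketched below: a stop word is appended when a structured response format is requested. This is a simplified sketch of the idea, not the actual Dify source; the function name, signature, and the specific stop word are assumptions:

# Simplified sketch of the stop-word handling described above -- not the
# actual OpenAILargeLanguageModel implementation.
def _transform_json_prompts(prompt: str, stop: list[str] | None) -> tuple[str, list[str]]:
    stop = list(stop or [])
    # For JSON-format responses, appending a closing-fence stop word keeps
    # the model from generating text past the JSON block.
    if "\n```" not in stop:
        stop.append("\n```")
    return prompt, stop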

If you're still facing issues, providing more details about your setup, configuration, and the specific part of your workflow where this issue occurs could help in identifying a more precise solution.

About Dosu: This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


dosubot[bot] · May 08 '24 19:05

Has anyone been working on solving this issue? I have encountered the same problem.

iYaDongWang · Jun 19 '24 02:06

Can you try the latest version?

crazywoola · Jul 09 '24 14:07

In version 0.6.12-fix1, this issue still exists. I will try the latest version later.

iYaDongWang · Jul 10 '24 01:07

> Can you try the latest version?

Same issue in 0.6.13.

sagarey · Jul 11 '24 03:07

This may also be connected or related to #6208.

sepa85 · Jul 11 '24 18:07