Text Generator Internal prompt completion errors
Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.13
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
When the model is a code-completion (fill-in-the-middle) model, a bug in assembling the built-in prompt causes the model to return abnormal output. Steps to reproduce:
- Add codeqwen1.5 as a code-completion model.
- Create a text-generation application and select codeqwen1.5 as its model.
- Set the prompt to: `<fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>`
- Enter any text for code_pre and code_sur, then click Generate. The prompt actually received by the model becomes: `<fim_prefix>xxxxxx<fim_suffix>xxxxxxx<fim_middle>\n{{#query#}}`
Root cause: in `core/prompt/simple_prompt_transform.py`, the `get_prompt_template` method assembles the prompt incorrectly. When the app mode is COMPLETION or WORKFLOW there is no query variable, yet an extra `\n` and `{{#query#}}` are still concatenated onto the template.
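A minimal sketch of the fix, assuming the assembly logic described above; the names `AppMode` and `assemble_prompt` are illustrative and do not mirror the exact Dify source:

```python
from enum import Enum


class AppMode(Enum):
    CHAT = "chat"
    COMPLETION = "completion"
    WORKFLOW = "workflow"


def assemble_prompt(prompt_template: str, mode: AppMode) -> str:
    """Append the query placeholder only for modes that have a user query.

    The reported bug appended "\\n{{#query#}}" unconditionally; COMPLETION
    and WORKFLOW apps have no query variable, so their templates must be
    used verbatim.
    """
    if mode in (AppMode.COMPLETION, AppMode.WORKFLOW):
        return prompt_template
    return prompt_template + "\n{{#query#}}"


template = "<fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>"
print(assemble_prompt(template, AppMode.COMPLETION))
# -> <fim_prefix>{{code_pre}}<fim_suffix>{{code_sur}}<fim_middle>
```

With this guard, a completion app sends the FIM template unchanged, while a chat app still gets the `{{#query#}}` placeholder appended.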
✔️ Expected Behavior
The correct prompt is: `<fim_prefix>xxxxxx<fim_suffix>xxxxxxx<fim_middle>`
❌ Actual Behavior
The prompt received by the model is: `<fim_prefix>xxxxxx<fim_suffix>xxxxxxx<fim_middle>\n{{#query#}}`