After a model deployed with OpenLLM is added to Dify 0.6.3, answers come back as "Human".
Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
For a model deployed with OpenLLM, after adding it to Dify 0.6.3, the answer is "Human" or "Human </histories", but when I call the locally deployed API directly myself, the answer is normal.
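For reference, a direct call to the local OpenLLM server looked roughly like the sketch below; the host, port, endpoint path, and payload keys are placeholders and may not match the actual deployment.

```python
import requests

# Placeholder call to a locally deployed OpenLLM server; the URL and payload
# shape are assumptions, not the exact request that was used.
resp = requests.post(
    "http://localhost:3000/v1/generate",
    json={
        "prompt": "What is the capital of France?",
        "llm_config": {"max_new_tokens": 128},
    },
    timeout=60,
)
print(resp.json())  # the answer here is normal, with no "Human" / "</histories" markers
```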
✔️ Expected Behavior
The answer should not contain "Human" or other extraneous markers.
❌ Actual Behavior
The answer is "Human" or "Human </histories".
The issue you're experiencing with responses being "Human" or "Human </histories" when using Dify 0.6.3 with a model deployed with OpenLLM seems to be related to the response processing within Dify's handling methods. Specifically, the methods `_handle_chat_generate_stream_response` and `_handle_chat_generate_response` in the `OpenLLMLargeLanguageModel` class are likely not filtering or interpreting the model's raw output correctly before constructing the output messages.

These methods use the model's response content directly to create `AssistantPromptMessage` objects, without additional processing to remove or handle special formatting or control characters. This direct approach can cause discrepancies between the expected output and what Dify generates, especially if the model's response includes special markers or formatting instructions.
To address this, you might need to adjust the implementation of these methods to include logic for correctly processing the model's output. This could involve stripping out unwanted control strings or formatting instructions from `response.content` or `message.content` before they are used to construct the output messages. Implementing such logic will help ensure that the output matches the expected format, eliminating the issue of unexpected "Human" or "Human </histories" responses.
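As an illustration only, such post-processing could look like the sketch below. `clean_model_output` and `_STOP_MARKERS` are hypothetical names, not part of the Dify codebase; the actual fix would live inside the two handler methods mentioned above.

```python
# Hypothetical helper; the names and marker list are illustrative only.
_STOP_MARKERS = ("Human:", "Human", "</histories")

def clean_model_output(text: str) -> str:
    """Truncate the raw model output at the first stop marker, if any."""
    cut_points = [text.find(marker) for marker in _STOP_MARKERS if marker in text]
    if cut_points:
        text = text[: min(cut_points)]
    return text.rstrip()

# In _handle_chat_generate_response, the cleaned text would then be used
# to build the prompt message, e.g.:
#   AssistantPromptMessage(content=clean_model_output(response.content))
```

String matching like this is crude (it would also cut a legitimate answer that happens to contain the word "Human"), so a more robust fix would rely on the stop sequences passed to the model rather than post-hoc filtering.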
For a detailed look at how these methods are currently implemented and where adjustments might be necessary, you can refer to the Dify codebase, specifically the `OpenLLMLargeLanguageModel` class. This will give you a clearer idea of how responses are handled and where modifications can be made to resolve the issue.
The same issue occurs with the Hugging Face model provider and the mistralai/Mistral-7B-Instruct-v0.2 API endpoints.
Setting the stop words, maybe?
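If stop words are the cause, one quick way to test it is to pass stop sequences when calling the model directly; the sketch below assumes the local deployment exposes an OpenAI-compatible endpoint, and the base URL, model name, and stop strings are placeholders.

```python
from openai import OpenAI

# Assumes an OpenAI-compatible endpoint on the local deployment;
# base URL, model name, and stop strings are placeholders.
client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="my-local-model",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stop=["Human:", "</histories"],  # stop generation before these markers appear
)
print(completion.choices[0].message.content)
```

If the markers disappear when explicit stop sequences are supplied, that points to Dify not forwarding stop words to the OpenLLM and Hugging Face runtimes.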
Hi, @upseem,
I'm helping the Dify team manage their backlog and am marking this issue as stale. From what I understand, you encountered unexpected responses when using Dify 0.6.3 with a model deployed with OpenLLM. The issue seems to be related to the response processing within Dify's handling methods, specifically in the `OpenLLMLargeLanguageModel` class. Users dosubot, vccler, and crazywoola have provided insights and suggestions for addressing it, including adjusting the implementation of specific methods and considering the handling of stop words.
Could you please confirm if this issue is still relevant to the latest version of the Dify repository? If it is, please let the Dify team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!
Please upgrade to the latest version to see if this issue persists. :)