OpenAI-API-compatible model provider doesn't work with the Perplexity API
Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.5
Cloud or Self Hosted
Cloud
Steps to reproduce
- Add a Perplexity API model under the OpenAI-API-compatible provider and save.
- Use the Perplexity model in a workspace app; the assistant's output is always empty.
- The API call visible in the browser receives a response, but the answer field is empty.
- The same Perplexity API key works fine in the Python client (see the sketch below).
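For reference, a minimal sketch of the direct check that worked: the official OpenAI Python client pointed at Perplexity's OpenAI-compatible endpoint (the model name is illustrative, not necessarily what was configured in Dify):

```python
# Minimal sketch of the direct check that worked: the official OpenAI
# Python client pointed at Perplexity's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="pplx-...",                    # the same Perplexity API key
    base_url="https://api.perplexity.ai",  # Perplexity's endpoint
)

response = client.chat.completions.create(
    model="sonar",  # illustrative; substitute the model configured in Dify
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)  # non-empty here, empty via Dify
```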
✔️ Expected Behavior
The Perplexity model returns a normal response.
❌ Actual Behavior
The Perplexity model returns an empty response.
It appears the issue with the Perplexity model returning an empty response in the Dify workspace app might stem from differences in response handling between the app and the Python client. Specifically, the app's handling of responses, including error conversion and metadata simplification, could be leading to the observed behavior. The workspace app's specific methods for processing the 'choices' field in the model's response might also play a role.
To address this, consider examining how the workspace app processes responses, especially looking into the handling of errors and the 'choices' field. Utilizing the Logs feature in Dify could provide further insights into the issue by recording interactions with the application.
Ensure the Perplexity model is correctly configured in the Dify workspace app, as incorrect settings could also lead to empty responses. This includes checking configurations related to Hugging Face models, if applicable.
For more detailed troubleshooting, refer to the Dify documentation on logs, model configuration, and handling of model responses.
Details
Sources
- api/core/app/apps/base_app_generate_response_converter.py
- api/core/model_runtime/model_providers/xinference/llm/llm.py
- en/features/logs.md
- en/tutorials/model-configuration/README.md
- en/tutorials/model-configuration/replicate.md
- en/tutorials/model-configuration/hugging-face.md
About Dosu: This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
To continue the conversation, mention @dosu.
I suspect the problem is in the code api/core/model_runtime/model_providers/openai_api_compatible/llm/llm.py, which looks like it implements its own parsing of the OpenAI-compatible protocol rather than using the OpenAI client directly.
> I suspect the problem is in the code api/core/model_runtime/model_providers/openai_api_compatible/llm/llm.py, which looks like it implements its own parsing of the OpenAI-compatible protocol rather than using the OpenAI client directly.
OpenAI-API-compatible is meant for any model that exposes the same format, so we didn't use the client directly.
> I suspect the problem is in the code api/core/model_runtime/model_providers/openai_api_compatible/llm/llm.py, which looks like it implements its own parsing of the OpenAI-compatible protocol rather than using the OpenAI client directly.

> OpenAI-API-compatible is meant for any model that exposes the same format, so we didn't use the client directly.
OK, but I think there may be a bug in the implementation, because the Perplexity API is also compatible with the OpenAI API protocol, yet calling the model does not return a valid result :(
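One way to narrow this down is to dump the raw bytes Perplexity streams back and check how events are separated on the wire (a hedged sketch; the model name is illustrative):

```python
# Sketch for inspecting the raw SSE stream from Perplexity, to see how
# the server separates events on the wire.
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": "Bearer pplx-..."},
    json={
        "model": "sonar",  # illustrative; use the model configured in Dify
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
)
for chunk in resp.iter_content(chunk_size=None):
    print(repr(chunk))  # shows whether events end with b"\n" or b"\n\n"
```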
Please set the delimiter for streaming results to "\n" instead of the default "\n\n" when setting up the credentials; this issue is because the pplx API is not a fully standard OpenAI-compatible interface.
> Please set the delimiter for streaming results to "\n" instead of the default "\n\n" when setting up the credentials; this issue is because the pplx API is not a fully standard OpenAI-compatible interface.
Cool, it works! Thanks.
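For readers who land here later, the root cause is the event delimiter in the streamed response: standard OpenAI-style SSE separates events with a blank line ("\n\n", Dify's default), while Perplexity terminates each event with a single "\n". A loose sketch of the behavior (not Dify's actual code, which lives in api/core/model_runtime/model_providers/openai_api_compatible/llm/llm.py):

```python
# Loose sketch (not Dify's actual implementation) of delimiter-based SSE
# parsing: a complete event is emitted only once the delimiter appears.
def iter_sse_events(chunks, delimiter="\n\n"):
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while delimiter in buffer:
            event, buffer = buffer.split(delimiter, 1)
            yield event
    # data that never hits the delimiter stays buffered and is discarded

# Perplexity ends each event with a single "\n", so the default "\n\n"
# delimiter never matches and the assistant's answer comes back empty:
stream = ['data: {"choices":[{"delta":{"content":"Hi"}}]}\n', "data: [DONE]\n"]
print(list(iter_sse_events(stream, "\n\n")))  # [] -> empty answer
print(list(iter_sse_events(stream, "\n")))    # both events parsed
```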