The frontend cannot show the text generated by the LLM
Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to Discussions.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.6.6
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
I deployed a local OpenAI-API-compatible server, but the frontend cannot show the text generated by the LLM. However, the logs of my local OpenAI-API-compatible server do show the generated text, which indicates that the text really was generated. See the right-hand dialogue below.
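For reference, a direct call to the server confirms it returns text outside of Dify. This is a minimal sketch; the base URL, port, and model name are placeholders, not my actual values:

```python
import requests

# Placeholder values (assumptions): replace with the actual address, port,
# and model name of the local OpenAI-API-compatible server.
BASE_URL = "http://127.0.0.1:8000/v1"
MODEL = "my-local-model"

# Blocking (non-streaming) chat completion request.
resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```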
✔️ Expected Behavior
The frontend shows the text generated by the LLM.
❌ Actual Behavior
The frontend does not show the text generated by the LLM.
It looks like the issue with the frontend not displaying the generated text from your LLM might be due to how the Docker network configuration is set up, particularly regarding how your services communicate with each other. Here's a more streamlined approach to troubleshoot and potentially resolve this issue:

- Network Configuration: Ensure your LLM service is accessible from within Docker by binding it to `0.0.0.0` instead of `localhost` or `127.0.0.1`. This makes the service available on your local network.
- Correct IP Address: In your Dify configuration, use the actual IP address of the machine where the LLM service is running instead of `localhost`. This could be the local network IP (e.g., `192.168.x.x`) or the Docker host IP if you're running everything on the same machine but in different containers.
- Check Dify Frontend Configurations: Verify that the model name, base URL, and other relevant settings in Dify are correctly configured to match your LLM service. This includes setting the correct model type and ensuring the base URL points to where your LLM service is accessible.
- Review Logs for Specific Errors: If the problem persists, check the frontend logs for any specific error messages that could give more insight into the issue.

By following these steps, you should be able to diagnose and fix the problem with the frontend not displaying the generated text (a quick reachability check for the first two points is sketched after this list). If you encounter any specific errors or need further assistance, feel free to ask.
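As a concrete way to verify the first two points, the following sketch checks that the service is reachable at the machine's LAN IP rather than `localhost`. The IP and port here are placeholder assumptions, not values taken from this issue:

```python
import requests

# Placeholder values (assumptions): replace with the LAN IP of the machine
# running the LLM service and the port it listens on.
HOST_IP = "192.168.1.100"
PORT = 8000
BASE_URL = f"http://{HOST_IP}:{PORT}/v1"

# If the server is bound to 0.0.0.0, this request should succeed from another
# machine or from inside a Dify container; if it only works against
# 127.0.0.1, the binding (or the base URL configured in Dify) is the likely problem.
resp = requests.get(f"{BASE_URL}/models", timeout=10)
print(resp.status_code, resp.json())
```

Running the same check from inside the Dify API container (for example via `docker exec`) confirms whether the containers themselves can reach the service.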
Sources
- Why the frontend cannot show the generated text by llm?
- en/tutorials/model-configuration/ollama.md
- web/app/components/app/configuration/debug/index.tsx
- en/tutorials/model-configuration/localai.md
- api/controllers/console/error.py
- en/user-guide/launching-dify-apps/developing-with-apis/README.md
- zh_CN/guides/application-publishing/developing-with-apis.md
- api/core/workflow/nodes/llm/llm_node.py
- api/core/model_runtime/model_providers/triton_inference_server/llm/llm.py
My local OpenAI-API-compatible server only implements the `/v1/models` and `/v1/chat/completions` APIs, and the `/v1/chat/completions` API also passed the integration test:

dify/api/tests/integration_tests/model_runtime/openai_api_compatible/test_llm.py at 0.6.4 · langgeni
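Since the blocking call passes the integration test, one thing I still plan to check (an assumption on my part, not something confirmed in this thread) is the streaming path, because the chat UI typically consumes the response incrementally. A sketch of that check, again with placeholder base URL and model name:

```python
import json
import requests

# Placeholder values (assumptions): adjust to the local server's address and model.
BASE_URL = "http://127.0.0.1:8000/v1"
MODEL = "my-local-model"

# Request a streamed (SSE) chat completion and print each content delta.
with requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
    timeout=30,
) as resp:
    resp.raise_for_status()
    for raw_line in resp.iter_lines():
        if not raw_line:
            continue
        line = raw_line.decode("utf-8")
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content", "")
        print(delta, end="", flush=True)
```

If nothing prints here while the non-streaming call works, the streamed output format would be the first thing I would look at.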