Langfuse v3+ not displaying the cost
Self Checks
- [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
0.15.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Attach Langfuse v3 to a Dify workflow that uses LLM nodes and Knowledge Retrieval nodes.
✔️ Expected Behavior
Every node that uses an LLM should report the number of tokens used and the model_id of the model used:
- Knowledge Retrieval
- Parameter Extractor
- LLM
❌ Actual Behavior
PR https://github.com/langgenius/dify/pull/13117 supposedly fixed this issue, but in Langfuse I keep seeing model_name instead of model_id, and in some nodes I don't even see model_name.
The issue you're experiencing with Langfuse not parsing the model_id and instead showing model_name, or missing it entirely, seems to be related to the changes made in pull request #13117. This pull request was intended to address the reporting of model usage in LLM nodes and Knowledge Retrieval nodes, but it appears that the fix may not have fully resolved the issue.
Currently, Dify handles the reporting of model_id and model_name by including them in the process_data dictionary for LLM nodes. The model_name is stored under the key "model_name", and the model_mode is stored under the key "model_mode" [1]. However, if you're seeing discrepancies in Langfuse, it might be due to how this data is being processed or reported in the integration.
To resolve this, you might want to check the implementation details in your setup to ensure that the process_data is correctly populated and reported. Additionally, reviewing the changes in the pull request #13117 might provide insights into any potential oversights or areas that need further adjustment. If the issue persists, it may require further investigation or a follow-up fix in the Dify repository.
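For reference, here is a minimal sketch of what reading that data could look like. The key names "model_name" and "model_mode" match what Dify stores in process_data per the above; the helper function and fallback behavior are hypothetical, for illustration only, and not the actual integration code:

```python
# Hypothetical illustration of how a tracing integration could read
# model information out of an LLM node's process_data. Only the key
# names "model_name" and "model_mode" come from Dify; the rest is
# an assumption for illustration.

def extract_model_info(process_data: dict) -> dict:
    """Pull model metadata from a node's process_data, if present."""
    return {
        # Dify stores the model's name under "model_name" ...
        "model": process_data.get("model_name"),
        # ... and the invocation mode (e.g. "chat") under "model_mode".
        "mode": process_data.get("model_mode"),
    }

# A node that populated process_data correctly:
info = extract_model_info({"model_name": "gpt-4o", "model_mode": "chat"})
assert info == {"model": "gpt-4o", "mode": "chat"}

# A node that didn't populate it -- the symptom reported above,
# where Langfuse shows no model at all:
missing = extract_model_info({})
assert missing == {"model": None, "mode": None}
```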
Anyone from the team willing to fix the Langfuse integration? Their official docs state that some properties are deprecated: https://python.reference.langfuse.com/langfuse/client#Langfuse.generation
And langfuse_trace.py clearly still uses them and does not pass the data correctly (no input tokens, no output tokens, just the total token count, no model name, etc.).
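For comparison, this is roughly what a generation call with full usage data looks like with the Langfuse Python SDK (v2-style client). This is a sketch, not Dify's actual langfuse_trace.py code; the trace name, model, and token counts are invented:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* env vars for keys/host

trace = langfuse.trace(name="dify-workflow-run")  # name is made up

# Passing the model plus a usage dict with separate input/output
# counts is what lets Langfuse compute per-generation cost.
# Reporting only a total (or using deprecated fields) breaks that.
trace.generation(
    name="llm-node",
    model="gpt-4o",  # model identifier, example value
    input=[{"role": "user", "content": "Hello"}],
    output="Hi there!",
    usage={
        "input": 12,   # prompt tokens (example values)
        "output": 5,   # completion tokens
        "total": 17,
        "unit": "TOKENS",
    },
)

langfuse.flush()
```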
Any news? Am I the only one using Langfuse v3+ who cannot see costs, input tokens, and output tokens? :(
I want to add that LLM nodes that error out will report the price (calculated by Langfuse), while successful nodes will not. This could narrow the problem down to an issue with the total_price and currency metadata, although I don't know how Langfuse was able to calculate token_in and token_out in the error case.
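If the problem really is in the cost fields, note that the v2 Python SDK's usage payload can, as far as I can tell, also carry explicit cost values alongside token counts, so the integration could report Dify's own total_price instead of relying on Langfuse's model-based price inference. A hedged sketch, with invented names and figures:

```python
from langfuse import Langfuse

langfuse = Langfuse()
trace = langfuse.trace(name="dify-workflow-run")  # name is made up

# The *_cost keys let the caller supply an explicit cost (e.g.
# Dify's total_price) rather than having Langfuse infer it from
# the model name. All values here are invented for illustration.
trace.generation(
    name="llm-node-with-cost",
    model="gpt-4o",
    usage={
        "input": 12,
        "output": 5,
        "total": 17,
        "unit": "TOKENS",
        "total_cost": 0.00034,  # e.g. Dify's total_price, in USD
    },
)
langfuse.flush()
```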
Bumping this issue.
One of the features that sets Dify apart from other tools is its analytics / logging. How come no one cares that the Langfuse integration is not working as expected?
@crazywoola it's been a month since this issue was opened; can someone at least acknowledge this bug? I'm not asking for an immediate fix if there's something more important, but the inability to track prices per user is becoming a real issue, and with an ever-growing list of issues it feels like those not on the main page are ignored until dosubot closes them as stale.
I’ve already fixed the tracking of Parameter Extractor and LLM usage data in Langfuse. As for Knowledge Retrieval, we’re currently not collecting token or model usage statistics for that component.