Token Count Shows Zero in Workflow Logs When Using OpenAI-API-Compatible Models
Self Checks
- [x] This is only for bug reports; if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit the issue in English, otherwise it will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
1.0.0
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
1. Create a workflow using any OpenAI-API-compatible model endpoint
2. Execute the workflow
3. Check the execution logs in workflow monitoring
4. Observe that the token count shows 0 in 1.0.0 vs. the correct count in 0.15.3
✔️ Expected Behavior
Token consumption should be calculated and displayed correctly, as in 0.15.3.
❌ Actual Behavior
1.0.0 shows prompt_tokens: 0
0.15.3 shows correct token counts with the same workflow and model
@zzzdajb Hello! I'm here to help you with bug fixes, answer questions, and assist you while waiting for a human maintainer. Let's tackle this together!
I found two similar issues that might be relevant to your problem:
- For each node is not correctly counted in token usage: This issue was closed by a pull request that fixed the token count at the iteration node [1].
- Upgrade to 0.8.3, metadata still lacks the return of token usage information: This issue was resolved by a pull request that addressed an assignment exception [2].
These issues indicate that there have been previous problems with token counting, which were resolved through code changes. It might be worth checking if similar fixes could apply to your situation in version 1.0.0. Additionally, ensure that your configuration settings, such as MAXIMUM_CHUNK_TOKEN_LENGTH, are correctly set, as they can affect token counting [3].
To control for variables, I tested with the same API at the same time and kept as many of the other variables as possible identical, but the problem persists. I therefore believe this is caused by the version update. My guess is that the old version provided a default tokenizer, while the new version does not.
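If that guess is right, the missing piece would only need to be something like the sketch below: a rough illustration of a local fallback count using tiktoken's cl100k_base encoding as an approximate default. The function name and encoding choice are assumptions for illustration, not Dify's actual code.

```python
# Hypothetical fallback: approximate usage locally when the provider's
# usage block is missing or all zeros. Not Dify's actual implementation.
import tiktoken


def estimate_usage(prompt: str, completion: str, reported: dict | None = None) -> dict:
    """Return provider-reported usage if present, otherwise a local estimate."""
    if reported and reported.get("total_tokens", 0) > 0:
        return reported

    # cl100k_base is only an approximation for non-OpenAI models.
    enc = tiktoken.get_encoding("cl100k_base")
    prompt_tokens = len(enc.encode(prompt))
    completion_tokens = len(enc.encode(completion))
    return {
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
    }
```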
+1, I ran into the same problem.
I also encountered this problem:

    {
      "text": "print………………",
      "usage": {
        "prompt_tokens": 0,
        "prompt_unit_price": "0",
        "prompt_price_unit": "0",
        "prompt_price": "0",
        "completion_tokens": 0,
        "completion_unit_price": "0",
        "completion_price_unit": "0",
        "completion_price": "0",
        "total_tokens": 0,
        "total_price": "0",
        "currency": "USD",
        "latency": 3.0446406310002203
      },
      "finish_reason": "stop"
    }
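To narrow down where the zeros come from, one option is to call the same OpenAI-compatible endpoint directly and check whether the provider itself returns a non-zero usage block; if it does, the zeros are being introduced on the Dify side. The URL, API key, and model name below are placeholders, not real values.

```python
# Minimal check against an OpenAI-compatible endpoint (placeholder values).
import requests

resp = requests.post(
    "https://your-provider.example.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "your-model-name",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
# If this prints real numbers, the provider reports usage and the zeros come from Dify.
print(resp.json().get("usage"))
```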
@crazywoola Same here. Could you please review this issue? It appears to be identical to #14833
+1, the same problem.
+1, the same problem.
+1, same problem.
+1, same problem.
Please try updating OpenAI-API-compatible to the latest version; the current version is 0.0.16.
Btw, the latest Dify no longer shows the total token count up front, but you can still view it inside the details.