Langfuse & LangSmith integration fails to track cost and usage in agentic flows
Describe the bug
The integration with observability platforms (both Langfuse and LangSmith) is not working correctly for cost and token usage tracking when using agentic flows (e.g., AgentFlow, ConditionAgent).
While traces are successfully sent and appear in the dashboards of both platforms, all cost and token usage metrics remain at $0.00. The model name and token counts are not being registered for any of the LLM generation steps.
To Reproduce
Steps to reproduce the behavior:
1. Create a chatflow using an AgentFlow structure, for example a ConditionAgent that routes to different LLM or Agent nodes.
2. Configure the LLM/Agent nodes to use a standard OpenAI model, such as gpt-4o-mini.
3. Critically, ensure the "Streaming" option is turned OFF for all of these nodes.
4. Add either the Langfuse or LangSmith handler to the chatflow settings, configured with valid credentials.
5. Run the chatflow.
6. Observe the resulting trace in the corresponding platform (Langfuse or LangSmith). The trace appears, but every generation step shows $0.00 cost and 0 tokens used.
Expected behavior
The observability platform (Langfuse/LangSmith) should correctly display the token counts (input_tokens, output_tokens) and the calculated cost for each generation step within the trace.
Crucial Findings & Debugging Journey
This is not a simple configuration error. After extensive debugging, we have confirmed the following:
Flowise DOES generate the usage data internally. By inspecting the output of the LLM nodes within the Flowise UI, we can see the usageMetadata object is present and contains the correct token counts. Example from a node's output:

```json
"usageMetadata": {
  "output_tokens": 25,
  "input_tokens": 295,
  "total_tokens": 320
}
```

This proves the issue is not in the communication with the LLM provider, but in the communication from Flowise to the observability handler.
The issue is platform-agnostic. The exact same failure occurs with both Langfuse and LangSmith, which strongly suggests the bug is within Flowise's callback/handler mechanism itself.
The issue is environment-agnostic. This failure has been reproduced in multiple environments:

- A self-hosted Docker deployment.
- The official Flowise Cloud service.

This rules out environment-specific configuration issues such as incorrect environment variables.
A manual workaround is not feasible. An attempt to bypass the handler with a Custom JS Function to make a direct HTTP POST request to the Langfuse API failed because the global traceId variable (e.g., {{$flow.traceId}}) is not accessible within the function's scope, making it impossible to associate the manual event with the correct trace.
Environment
Flowise Version: 3.0.3
Deployment Method: Reproduced on both self-hosted Docker and the official Flowise Cloud.
Node(s) Used: ConditionAgent, Agent, LLM, ChatOpenAI.
Use Method
agentflow
This is a good issue. I hope it gets resolved soon.
+1 for visibility
same issue encountered
+1 Any news?