
Langfuse & LangSmith integration fails to track cost and usage in agentic flows

Open DaniRod9 opened this issue 6 months ago • 14 comments

Describe the bug

The integration with observability platforms (both Langfuse and LangSmith) does not correctly track cost and token usage when using agentic flows (e.g., AgentFlow, ConditionAgent).

While traces are successfully sent and appear in the dashboards of both platforms, all cost and token usage metrics remain at $0.00. The model name and token counts are not being registered for any of the LLM generation steps.

To Reproduce

Steps to reproduce the behavior:

Create a chatflow using an AgentFlow structure, for example, a ConditionAgent that routes to different LLM or Agent nodes.

Configure the LLM/Agent nodes to use a standard OpenAI model, such as gpt-4o-mini.

Critically, ensure the "Streaming" option is turned OFF for all these nodes.

Add either the Langfuse or LangSmith handler to the chatflow settings, configured with valid credentials.

Run the chatflow.

Observe the resulting trace in the corresponding platform (Langfuse or LangSmith). The trace appears, but all generation steps show $0.00 cost and 0 tokens used.

Expected behavior

The observability platform (Langfuse/LangSmith) should correctly display the token counts (input_tokens, output_tokens) and the calculated cost for each generation step within the trace.

Crucial Findings & Debugging Journey

This is not a simple configuration error. After extensive debugging, we have confirmed the following:

Flowise DOES generate the usage data internally. By inspecting the output of the LLM nodes within the Flowise UI, we can see the usageMetadata object is present and contains the correct token counts. Example from a node's output:

```json
"usageMetadata": {
  "output_tokens": 25,
  "input_tokens": 295,
  "total_tokens": 320
}
```

This proves the issue is not in the communication with the LLM provider, but in the communication from Flowise to the observability handler.
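For reference, here is a minimal sketch of where a LangChain-style callback handler would normally pick these counts up from an LLMResult. The field names follow LangChain JS conventions (`llmOutput.tokenUsage` and the newer per-message `usage_metadata`); they are illustrative assumptions about the handler path, not confirmed Flowise internals:

```javascript
// Sketch: extract token usage from a LangChain-style LLMResult object.
// Field names follow LangChain JS conventions and are assumptions here,
// not Flowise internals.
function extractUsage(llmResult) {
  // Newer providers attach usage to the message's usage_metadata...
  const msgUsage = llmResult.generations?.[0]?.[0]?.message?.usage_metadata;
  if (msgUsage) {
    return {
      inputTokens: msgUsage.input_tokens,
      outputTokens: msgUsage.output_tokens,
      totalTokens: msgUsage.total_tokens,
    };
  }
  // ...older providers report it under llmOutput.tokenUsage instead.
  const legacy = llmResult.llmOutput?.tokenUsage;
  if (legacy) {
    return {
      inputTokens: legacy.promptTokens,
      outputTokens: legacy.completionTokens,
      totalTokens: legacy.totalTokens,
    };
  }
  return null; // usage was dropped somewhere upstream
}

// Example with the usageMetadata observed in the Flowise node output:
const usage = extractUsage({
  generations: [[{
    message: {
      usage_metadata: { output_tokens: 25, input_tokens: 295, total_tokens: 320 },
    },
  }]],
});
console.log(usage); // { inputTokens: 295, outputTokens: 25, totalTokens: 320 }
```

Since the data is demonstrably present in the node output, the suspicion is that whatever result object reaches the Langfuse/LangSmith handlers in agentic flows no longer carries either of these two usage locations.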

The issue is platform-agnostic. The exact same failure occurs with both Langfuse and LangSmith, which strongly suggests the bug is within Flowise's callback/handler mechanism itself.

The issue is environment-agnostic. This failure has been reproduced in multiple environments:

A self-hosted Docker deployment.

The official Flowise Cloud service.

This rules out environment-specific configuration issues like incorrect environment variables.

A manual workaround is not feasible. An attempt to bypass the handler with a Custom JS Function to make a direct HTTP POST request to the Langfuse API failed because the global traceId variable (e.g., {{$flow.traceId}}) is not accessible within the function's scope, making it impossible to associate the manual event with the correct trace.
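For the record, the attempted workaround looked roughly like the sketch below. The endpoint and event shape are assumptions based on Langfuse's public batch ingestion API (`POST /api/public/ingestion`, `generation-create` events); the blocker is the `traceId` argument, which would have to come from `{{$flow.traceId}}` and is exactly the value that cannot be resolved inside a Custom JS Function:

```javascript
// Sketch of the attempted manual workaround: build a generation event for
// Langfuse's batch ingestion endpoint. Endpoint path and payload shape are
// assumptions from Langfuse's public API docs, not verified against Flowise.
function buildGenerationEvent(traceId, usageMetadata, model) {
  return {
    batch: [{
      id: `gen-${Date.now()}`,           // any unique event id
      type: "generation-create",         // Langfuse event type (assumed)
      timestamp: new Date().toISOString(),
      body: {
        traceId,                          // <-- the value we cannot obtain in-flow
        model,
        usage: {
          input: usageMetadata.input_tokens,
          output: usageMetadata.output_tokens,
          total: usageMetadata.total_tokens,
        },
      },
    }],
  };
}

// The POST the workaround would have performed (publicKey/secretKey are
// placeholders for real Langfuse credentials):
// await fetch("https://cloud.langfuse.com/api/public/ingestion", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: "Basic " + btoa(`${publicKey}:${secretKey}`),
//   },
//   body: JSON.stringify(buildGenerationEvent(traceId, usageMetadata, "gpt-4o-mini")),
// });
```

Without a resolvable trace id, the event would create an orphaned generation rather than enriching the existing trace, which is why this path was abandoned.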

Environment

Flowise Version: 3.0.3

Deployment Method: Reproduced on both self-hosted Docker and the official Flowise Cloud.

Node(s) Used: ConditionAgent, Agent, LLM, ChatOpenAI.

Use Method

agentflow

DaniRod9 avatar Jul 02 '25 14:07 DaniRod9

This is a good issue. I hope it gets resolved soon.

jemlog avatar Jul 03 '25 05:07 jemlog

+1 for visibility

x3mxray avatar Sep 02 '25 06:09 x3mxray

+1

sandeep-birdeye avatar Sep 02 '25 07:09 sandeep-birdeye

+1

doraig avatar Sep 09 '25 12:09 doraig

+1

iaminawe avatar Sep 09 '25 17:09 iaminawe

+1

igolus avatar Sep 10 '25 08:09 igolus

same issue encountered

howardtokka avatar Oct 06 '25 04:10 howardtokka

+1

LorenzoBoffa avatar Oct 15 '25 07:10 LorenzoBoffa

+1 Any news?

psanchezmolina avatar Oct 15 '25 20:10 psanchezmolina

+1

alexcinergy avatar Oct 15 '25 22:10 alexcinergy

+1

robsonfveiga avatar Nov 10 '25 16:11 robsonfveiga

+1

dimovdaniel avatar Nov 11 '25 06:11 dimovdaniel

+1

LarsEbt avatar Nov 12 '25 14:11 LarsEbt

+1

maximusbirkefeld avatar Nov 12 '25 14:11 maximusbirkefeld