
streamEvents doesn't return response_metadata

Godrules500 opened this issue 7 months ago

I'm not sure if this is a bug or my ignorance, but in the code below I'm using streamEvents, and no matter what I do, response_metadata is always empty. If I use .stream() I can get it, but then I struggle to keep streaming the response to the front end (which may be a me issue). Is this a bug, or is there a way to grab this info?

Essentially, I need to stream the data to the client and then, once the stream finishes, inspect things like errors or the finish reason.

```typescript
import { ChatVertexAI } from '@langchain/google-vertexai'
import { createReactAgent } from '@langchain/langgraph/prebuilt'
import { SystemMessage } from '@langchain/core/messages'

const llm = new ChatVertexAI({
  model: 'gemini-2.0-flash-001',
  temperature: 0,
  authOptions: {
    credentials: vertexProviderSettings.googleAuthOptions.credentials,
    projectId: vertexProviderSettings.project!
  },
  location: vertexProviderSettings.location
})

const agent = createReactAgent({
  llm,
  tools: [],
  /**
   * Modify the stock prompt in the prebuilt agent. See docs
   * for how to customize your agent:
   *
   * https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/
   */
  messageModifier: new SystemMessage(AGENT_SYSTEM_TEMPLATE)
})

const eventStream = agent.streamEvents({ messages: body.messages }, { version: 'v2' })
```
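
My consumption loop looks roughly like this (a simplified sketch; `res` stands in for whatever writable response I use to stream to the client):

```typescript
// Forward tokens to the client as they arrive, then try to capture the
// final metadata (finish reason, errors, usage) when the run ends.
for await (const event of eventStream) {
  if (event.event === 'on_chat_model_stream') {
    res.write(event.data.chunk.content) // token chunk to the front end
  } else if (event.event === 'on_chat_model_end') {
    console.log(event.data.output.response_metadata) // always empty for me
  }
}
res.end()
```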

Godrules500 · Jun 10 '25

Just to add: this is shown working in the Python docs (screenshot below), but it does not work with the JavaScript version.

[Screenshot: Python streamEvents documentation]

Godrules500 · Jun 30 '25

Hello! I believe this to be a LangChain.js issue, since other LLM providers do output response metadata.

Transferring to LangChain.js (cc @afirstenberg)

dqbd · Jul 08 '25

@Godrules500 Can you try adding this?

```javascript
const eventStream = agent.streamEvents(
  { messages: body.messages },
  {
    stream_options: {
      include_usage: true
    },
    version: 'v2'
  }
);
```

rrutwik · Sep 06 '25

Investigation Findings

I investigated this issue and found that the core LangChain.js streamEvents implementation correctly handles response_metadata.

How it works:

  1. In _streamIterator() (chat_models.ts), chunk.generationInfo is merged into chunk.message.response_metadata.
  2. AIMessageChunk.concat() properly merges response_metadata via mergeResponseMetadata() (see the sketch after this list).
  3. The on_chat_model_end event in streamEvents includes the full aggregated message with response_metadata.
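
For intuition, step 2 can be reproduced directly with `@langchain/core` (a minimal sketch with made-up chunk values, not the library's internal code):

```typescript
import { AIMessageChunk } from '@langchain/core/messages'

// Content typically arrives in early chunks; fields like finish_reason
// often arrive only on the final chunk of the stream.
const first = new AIMessageChunk({ content: 'Hello' })
const last = new AIMessageChunk({
  content: '!',
  response_metadata: { finish_reason: 'stop', model_name: 'gemini-2.0-flash-001' }
})

// concat() merges response_metadata, so the aggregated message keeps it.
const full = first.concat(last)
console.log(full.content)           // "Hello!"
console.log(full.response_metadata) // { finish_reason: 'stop', model_name: '...' }
```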

Test verification:

I wrote a test that confirms response_metadata (including finish_reason, model_name, usage, etc.) flows through streamEvents correctly. See PR #9589.

Possible causes for your issue:

  1. Looking at the wrong event: response_metadata is available in the on_chat_model_end event, not in intermediate on_chat_model_stream events. Make sure you're checking:

```javascript
for await (const event of eventStream) {
  if (event.event === 'on_chat_model_end') {
    console.log(event.data.output.response_metadata);
  }
}
```

  2. LangGraph's createReactAgent: Since you're using createReactAgent from LangGraph, there may be additional layers that affect how events are surfaced. The agent wraps the LLM, so you might need to look for nested events; a quick way to check is to log every event, as in the sketch after this list.

  3. Vertex AI provider behavior: The Google Vertex AI provider does set response_metadata, but it may have different content than OpenAI (e.g., finish_reason is in generationInfo which gets merged into response_metadata).
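
To see what the agent graph actually emits, you can log every event together with its source (a sketch; `langgraph_node` is the metadata key LangGraph typically attaches, which may vary by version):

```typescript
for await (const event of eventStream) {
  // event.name is the runnable that produced it; langgraph_node (if present)
  // tells you which graph node the event was nested under.
  console.log(event.event, event.name, event.metadata?.langgraph_node)
}
```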

Could you try:

  1. Checking the on_chat_model_end event specifically for response_metadata
  2. Using the LLM directly (without the agent) to verify whether response_metadata appears (see the sketch after this list)
  3. Logging all events to see what's available
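
For item 2, a minimal sketch (assuming `HumanMessage` is imported from `@langchain/core/messages` and `llm` is the `ChatVertexAI` instance from the original snippet):

```typescript
import { HumanMessage } from '@langchain/core/messages'

// Call the model's streamEvents directly, bypassing the agent graph.
const directStream = llm.streamEvents([new HumanMessage('Hello!')], { version: 'v2' })

for await (const event of directStream) {
  if (event.event === 'on_chat_model_end') {
    // If response_metadata shows up here but not through the agent,
    // the problem is in the LangGraph layer rather than the provider.
    console.log(event.data.output.response_metadata)
  }
}
```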

Hope this helps! Let me know if you're still seeing issues after checking these.

nathannewyen · Dec 08 '25

@nathannewyen thanks for contributing a test for this. Will go ahead and close.

christian-bromann · Dec 13 '25