
MaxListenersExceededWarning: Possible EventTarget memory leak detected - not cleaning up AbortSignal listeners

jscott-yps opened this issue 9 months ago • 9 comments

MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. MaxListeners is 10. Use events.setMaxListeners() to increase limit

I am not entirely sure what is causing this. The same issue was apparently fixed in langchainjs:

https://github.com/langchain-ai/langchainjs/issues/6461

Did it somehow not make its way into this?
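
In the meantime, the escape hatch the warning itself suggests is events.setMaxListeners, which also accepts EventTarget instances such as an AbortSignal. A minimal sketch, assuming you create and pass the signal yourself (this only raises the ceiling; it doesn't fix the accumulation):

import { setMaxListeners } from "node:events";

// Raise the listener ceiling on the specific signal you pass into LangChain
// calls; 20 is an arbitrary example value, not a recommendation.
const controller = new AbortController();
setMaxListeners(20, controller.signal);

For reference, here is my code and the output: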

const messages = [{
  role: "user",
  content: data?.message || `Do something`,
}];

const config = { configurable: { thread_id: "conversation-num-1" } };

for await (const event of graph.streamEvents(
    { messages },
    { version: "v2", ...config }
)) {
    const kind = event.event;
    console.log(`Event: ${kind}: ${event.name}`);
}
Event: on_chain_start: Branch<agent>
Event: on_chain_end: Branch<agent>
Event: on_chain_end: agent
Event: on_chain_stream: New Agent
Event: on_chain_start: tools
Event: on_tool_start: subtract
Calling subtract with args {"a":5,"b":9}
Event: on_tool_end: subtract

 ERROR  (node:52053) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. MaxListeners is 10. Use events.setMaxListeners() to increase limit

Event: on_chain_start: ChannelWrite<...,tools>
Event: on_chain_end: ChannelWrite<...,tools>
Event: on_chain_end: tools
Event: on_chain_stream: New Agent
Event: on_chain_start: agent
Event: on_chat_model_start: ChatOpenAI

jscott-yps avatar Mar 20 '25 17:03 jscott-yps

Bumping this as it's happening here for me too.

Event: on_chain_start: LangGraph
Event: on_chain_start: __start__
Event: on_chain_start: ChannelWrite<...>
Event: on_chain_end: ChannelWrite<...>
Event: on_chain_start: ChannelWrite<__start__:supervisor>
Event: on_chain_end: ChannelWrite<__start__:supervisor>
Event: on_chain_end: __start__
Event: on_chain_start: supervisor
Event: on_chain_start: RunnableSequence
Event: on_prompt_start: ChatPromptTemplate
Event: on_prompt_end: ChatPromptTemplate
Event: on_chat_model_start: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_end: ChatOpenAI
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,supervisor>
Event: on_chain_end: ChannelWrite<...,supervisor>
Event: on_chain_start: Branch<supervisor>
Event: on_chain_end: Branch<supervisor>
Event: on_chain_end: supervisor
Event: on_chain_stream: LangGraph
Event: on_chain_start: conversationalist
Event: on_chain_start: LangGraph
Event: on_chain_start: __start__
Event: on_chain_start: ChannelWrite<...>
Event: on_chain_end: ChannelWrite<...>
Event: on_chain_start: ChannelWrite<__start__:agent>
Event: on_chain_end: ChannelWrite<__start__:agent>
Event: on_chain_end: __start__
Event: on_chain_start: agent
Event: on_chain_start: RunnableSequence
Event: on_chain_start: prompt
Event: on_chain_end: prompt
Event: on_chat_model_start: ChatOpenAI

(node:13572) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
(Use `node --trace-warnings ...` to show where the warning was created)

Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_end: ChatOpenAI
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,agent>
Event: on_chain_end: ChannelWrite<...,agent>
Event: on_chain_start: Branch<agent,continue,__end__>
Event: on_chain_end: Branch<agent,continue,__end__>
Event: on_chain_end: agent
Event: on_chain_end: LangGraph
Event: on_chain_start: ChannelWrite<...,conversationalist>
Event: on_chain_end: ChannelWrite<...,conversationalist>
Event: on_chain_end: conversationalist
Event: on_chain_stream: LangGraph
Event: on_chain_start: supervisor
Event: on_chain_start: RunnableSequence
Event: on_prompt_start: ChatPromptTemplate
Event: on_prompt_end: ChatPromptTemplate
Event: on_chat_model_start: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_end: ChatOpenAI
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,supervisor>
Event: on_chain_end: ChannelWrite<...,supervisor>
Event: on_chain_start: Branch<supervisor>
Event: on_chain_end: Branch<supervisor>
Event: on_chain_end: supervisor
Event: on_chain_stream: LangGraph
Event: on_chain_start: FINISH
Event: on_chain_start: RunnableSequence
Event: on_chat_model_start: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI
Event: on_chat_model_end: ChatOpenAI
Event: on_parser_start: StructuredOutputParser
Event: on_parser_end: StructuredOutputParser
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,FINISH>
Event: on_chain_end: ChannelWrite<...,FINISH>
Event: on_chain_end: FINISH
Event: on_chain_stream: LangGraph
Event: on_chain_end: LangGraph

limcolin avatar Mar 22 '25 03:03 limcolin

Bumping this as it's happening for me as well. It seems that making more than ~10 concurrent LLM calls triggers it; a sketch of the pattern is below.
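
Roughly this shape, if it helps; a hypothetical sketch of the pattern (not my exact code), using FakeListChatModel so it runs without API access:

import { FakeListChatModel } from "@langchain/core/utils/testing";

// One AbortSignal shared across more than 10 concurrent runs: each run adds
// its own "abort" listener to the shared signal, so the 11th registration
// crosses Node's default threshold of 10 and emits the warning.
const llm = new FakeListChatModel({
  sleep: 10,
  responses: Array(12).fill("Why hello!"),
});
const controller = new AbortController();

await Promise.all(
  Array.from({ length: 12 }, () =>
    llm.invoke("Hi there.", { signal: controller.signal })
  )
);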

nikhilshinday avatar Mar 25 '25 15:03 nikhilshinday

Hi @jscott-yps, @limcolin, and @nikhilshinday - thanks for reporting this!

Is this only occurring on streamEvents, or do you also see it when you call invoke or stream? What do you see when you run with --trace-warnings? Note: if you're not executing via node directly, you can still pass the flag via the NODE_OPTIONS env var, e.g. NODE_OPTIONS=--trace-warnings.

Also, if one of you could share an MRE, that would help us debug this more quickly.

benjamincburns avatar Mar 31 '25 03:03 benjamincburns

Unfortunately I'm unable to reproduce this without an MRE.

I ran a few variations of the test below to try to trigger this warning, but was unable to. I ran it with both ChatOpenAI and FakeListChatModel (including the latter version here, since it can be executed without API access). I tested on the current main as well as on 0.2.55. Version 0.2.55 was running against @langchain/[email protected], and the latest main was running against @langchain/[email protected].

import { it } from "@jest/globals";
import { FakeListChatModel } from "@langchain/core/utils/testing";
import {
  MessagesAnnotation,
  START,
  StateGraph,
} from "@langchain/langgraph";

it("should not warn about too many AbortSignal event listeners", async () => {
  const llm = new FakeListChatModel({
    sleep: 1,
    responses: Array(500).fill("Why hello!"),
  });

  const getResponse = async () => {
    return {
      messages: await llm.invoke([
        {
          role: "user",
          content: "Hi there.",
        },
      ]),
    };
  };

  const builder = new StateGraph<
    typeof MessagesAnnotation["spec"],
    typeof MessagesAnnotation["State"],
    typeof MessagesAnnotation["Update"],
    string
  >(MessagesAnnotation);

  // Chain 50 nodes so a single run makes many sequential model calls.
  for (let i = 0; i < 50; i += 1) {
    builder.addNode(`node-${i}`, getResponse);
    if (i === 0) {
      builder.addEdge(START, `node-${i}`);
    } else {
      builder.addEdge(`node-${i - 1}`, `node-${i}`);
    }
  }

  const graph = builder.compile();

  await graph.invoke({ messages: [] }, { recursionLimit: 51 });

  const stream = await graph.stream({ messages: [] }, { recursionLimit: 51 });
  for await (const _ of stream) { /* drain */ }

  const streamEvents = graph.streamEvents(
    { messages: [] },
    { version: "v2", recursionLimit: 51 }
  );
  for await (const _ of streamEvents) { /* drain */ }
});

benjamincburns avatar Mar 31 '25 03:03 benjamincburns

Hey @benjamincburns thanks for looking into this.

It's happening only on streamEvents for me - the logs show the sequence of events before the warning is displayed, although I can't say for certain whether a particular event is necessarily the trigger.

From @jscott-yps's logs it looks like streamEvents too.

limcolin avatar Mar 31 '25 03:03 limcolin

It seems like there’s a potential memory leak in the consumeRunnableStream function in /langchain-core/src/runnables/base.ts.

Specifically, this block:

options.signal.addEventListener(
  "abort",
  () => {
    abortController.abort();
  },
  { once: true }
);

adds an abort listener to the passed options.signal, but never removes it manually.

When Runnable.stream() or streamEvents() is called repeatedly with the same signal (or multiple times in parallel), the listeners accumulate, eventually triggering:

(node:75561) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. MaxListeners is 10. Use events.setMaxListeners() to increase limit
(Use node --trace-warnings ... to show where the warning was created)

In long-running apps or with multiple concurrent streams, this can become problematic.

Although the listener uses { once: true }, that only removes it after the abort event actually fires; if the signal never aborts, the listener stays attached, so repeated calls accumulate listeners and likely leak memory over time.

I'm not 100% certain yet, but I suspect this happens because addEventListener is used without a corresponding removeEventListener, especially when calling streamEvents repeatedly in a long-lived app. A sketch of the cleanup I'd expect is below.

Just wanted to flag this in case it's the root cause; I'll try confirming with a minimal reproduction soon.
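
For illustration, this is the kind of cleanup I'd expect there; a sketch with hypothetical names, not the actual langchain-core code:

// Hypothetical helper: forward aborts from an outer signal to an inner
// controller, and detach the listener once the wrapped work settles.
async function runWithSignal<T>(
  outer: AbortSignal,
  work: (inner: AbortSignal) => Promise<T>
): Promise<T> {
  const inner = new AbortController();
  const onAbort = () => inner.abort();
  outer.addEventListener("abort", onAbort, { once: true });
  try {
    return await work(inner.signal);
  } finally {
    // Without this, the listener lingers whenever `outer` never aborts.
    outer.removeEventListener("abort", onAbort);
  }
}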

keisokoo avatar Mar 31 '25 06:03 keisokoo

In case it helps, I am only calling streamEvents() once, not multiple times or in parallel. However, this is for a multi-agent (supervisor) workflow, which could possibly be the reason for this.

I've used streamEvents() (also single call) in a single-agent pattern without getting the warning.

limcolin avatar Mar 31 '25 06:03 limcolin

I'm also getting the same warning. I've attached the trace:

(node:4027766) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
    at [kNewListener] (node:internal/event_target:534:17)
    at [kNewListener] (node:internal/abort_controller:239:24)
    at EventTarget.addEventListener (node:internal/event_target:645:23)
    at node_modules/@langchain/core/dist/utils/signal.js:19:20
    at new Promise (<anonymous>)
    at raceWithSignal (node_modules/@langchain/core/dist/utils/signal.js:15:9)
    at RunnableSequence.invoke (node_modules/@langchain/core/dist/runnables/base.js:1274:39)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async RunnableCallable.callModel [as func] (node_modules/@langchain/langgraph/dist/prebuilt/react_agent_executor.js:235:27)
    at async RunnableCallable.invoke (node_modules/@langchain/langgraph/dist/utils.js:79:27)
----
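
For context on the raceWithSignal frame: a helper like that generally has the shape below (a sketch under assumptions, not the actual @langchain/core source). The addEventListener inside the Promise constructor is the call the trace points at; if nothing removes the listener when the raced promise settles first, it stays attached to the signal.

// Assumed shape of a race-with-signal helper. The finally-based cleanup is
// what prevents listeners from piling up when `promise` wins the race.
function raceWithSignal<T>(promise: Promise<T>, signal?: AbortSignal): Promise<T> {
  if (!signal) return promise;
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(new Error("Aborted"));
    signal.addEventListener("abort", onAbort, { once: true });
    promise
      .then(resolve, reject)
      .finally(() => signal.removeEventListener("abort", onAbort));
  });
}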

migrad avatar Apr 03 '25 19:04 migrad

Transferred this to the LangChain JS repo, per the trace above.

benjamincburns avatar Apr 04 '25 02:04 benjamincburns

Has the problem been solved?

zhangdongsh avatar May 09 '25 07:05 zhangdongsh

This is still happening

nivo33 avatar Jun 11 '25 06:06 nivo33

anyone got a fix for this?

cristianrdu avatar Jun 13 '25 20:06 cristianrdu

yo this is still happening

andrewdoro avatar Jun 16 '25 08:06 andrewdoro

https://github.com/langchain-ai/langchainjs/pull/7617/commits/c4df9456667abd9563f429bd8c0bd82342d2fca7 is the commit that probably broke it.

In our case we run long-lived lambdas on Vercel Fluid compute, so streamEvents calls share the same lambda process, which triggers this warning.

andrewdoro avatar Jun 16 '25 09:06 andrewdoro