
State Service never releases observers leading to memory growth


Bug Description

It appears that the InMemoryStateService never unsubscribes observers, leading to ever-increasing memory consumption with each message.

For each unique run ID, a new observer callback is added for each vertex. Because update_graph_state is a bound method, references to the vertex instance also appear to be held in memory even after the graph goes out of scope.

There don't appear to be any methods on the state manager to unsubscribe observers when a run finishes, so unless I'm missing something, this will grow without bound.
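
To illustrate what I think is happening, here is a minimal, self-contained sketch of the pattern. The class and method names mirror what I see in the code, but the bodies and the key format are simplified stand-ins, not the actual implementation:

```python
from collections import defaultdict
from typing import Callable


class InMemoryStateService:
    """Simplified stand-in for the real service; only the leaking piece."""

    def __init__(self) -> None:
        # The problem: this dict only ever grows.
        self.observers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, key: str, observer: Callable) -> None:
        if observer not in self.observers[key]:
            self.observers[key].append(observer)

    # No unsubscribe()/cleanup counterpart, so entries accumulate per run.


class Vertex:
    def update_graph_state(self, key, new_state, caller) -> None:
        """Bound method: holds a strong reference back to this Vertex."""


# Each run registers bound methods keyed (somehow) by run ID; once the run
# finishes and the graph goes out of scope, the service still references the
# callbacks, so the vertices (and the graph they point to) are never freed.
state_service = InMemoryStateService()
vertex = Vertex()
state_service.subscribe("some-run-id", vertex.update_graph_state)
```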

I fetched the state service and printed the length and keys of its observers dict, and found that it grows by 2 with every run of the API curl command against /api/v1/run/{name or run id} (based on the reproduction steps below).
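
For reference, this is roughly the diagnostic I used (the import path is from memory and may differ between versions):

```python
# Print how many observer keys the in-memory state service currently holds.
from langflow.services.deps import get_state_service

state_service = get_state_service()
print(len(state_service.observers), list(state_service.observers.keys()))
```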

Reproduction

  1. Set up a fresh instance of Langflow in a Docker container
  2. Add a Chat Input and a Chat Output component and connect them
  3. Copy the curl API call from the API section (a representative example is shown after these steps)
  4. Run it many times while observing memory via docker stats. The increase is small with this minimal flow, but larger with bigger flows/more vertices
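
The copied call looks roughly like the following; the flow ID is a placeholder and the exact payload Langflow generates may differ slightly between versions:

```bash
# Repeat this request while watching `docker stats`; each call adds observers.
curl -X POST "http://localhost:7860/api/v1/run/<your-flow-id>" \
  -H "Content-Type: application/json" \
  -d '{"input_value": "hello", "output_type": "chat", "input_type": "chat"}'
```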

Expected behavior

Observers are cleaned up when a run finishes, so references to the graph vertices are released, the vertices are garbage collected, and memory does not grow with every message sent.
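
To make this concrete, here is a rough sketch of the kind of cleanup API I would expect. The names unsubscribe and clear_run are my own suggestions for illustration, not existing Langflow methods, and the assumption that keys embed the run ID may not match the real key layout:

```python
class InMemoryStateService:
    def __init__(self) -> None:
        self.observers: dict[str, list] = {}

    def subscribe(self, key: str, observer) -> None:
        self.observers.setdefault(key, []).append(observer)

    def unsubscribe(self, key: str, observer) -> None:
        """Remove a single observer and drop the key once it is empty."""
        callbacks = self.observers.get(key, [])
        if observer in callbacks:
            callbacks.remove(observer)
        if not callbacks:
            self.observers.pop(key, None)

    def clear_run(self, run_id: str) -> None:
        """Drop every observer registered for a finished run (assuming the
        keys embed the run ID) so bound vertex callbacks become unreachable
        and the vertices can be garbage collected."""
        for key in [k for k in self.observers if run_id in k]:
            del self.observers[key]
```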

Who can help?

@italojohnny @ogabrielluiz

Operating System

MacOS 15.0

Langflow Version

1.1.1

Python Version

3.10

Screenshot

Here is a screenshot taken after I added a log statement that counts the keys in the observers dict and then logs them. You can see the count increase by 2 with each run of the curl command against the /api/v1/run/{} endpoint.

[Screenshot, 2024-12-15: log output showing the observer key count growing by 2 per request]

Flow File

No response

geoff-va · Dec 15, 2024