Title: graphiti-mcp-server Docker container hangs at 100% CPU when processing updates/corrections via add_episode
Description
The graphiti-mcp-server, when run via the provided docker-compose.yml, consistently enters an unresponsive state with 100% CPU utilization after processing an add_episode call that updates or corrects information about a recently added entity. Adding completely new, unrelated information seems to work fine, but modifying existing facts reliably triggers the hang.
To Reproduce
Steps to reproduce the behavior:
- Start the graphiti-mcp-server using docker compose -f mcp_server/docker-compose.yml up -d (using the files from the getzep/graphiti repo).
- Use the add_episode tool to add an initial fact about an entity (e.g., add_episode(name="Pet Info", episode_body="My dog Mambo has nickname 'Danger'.")). Wait for it to process (monitor logs or attempt a search).
- Use the add_episode tool again shortly after to correct or update the previous fact (e.g., add_episode(name="Pet Info Correction", episode_body="Update: Mambo's nickname is actually 'Lightning'.")).
- Observe the graphiti-mcp container's CPU usage (e.g., using docker stats). It will likely spike to 100%.
- Attempt subsequent tool calls (e.g., search_facts). They will likely hang or time out as the server is unresponsive.
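To script step 5 and tell a hung server from a responsive one, here is a minimal sketch (Python with the requests library; the URL assumes the default port 8000 from the compose file, and the 10-second timeout is illustrative):

```python
# responsiveness_check.py - probe the MCP server's SSE endpoint; a hung
# server will not answer before the timeout.
import requests

URL = "http://localhost:8000/sse"  # default port exposed by mcp_server/docker-compose.yml

try:
    # stream=True: we only need the response headers, not the endless SSE body
    with requests.get(URL, stream=True, timeout=10) as resp:
        print(f"Server responded: HTTP {resp.status_code}")
except requests.exceptions.Timeout:
    print("No response within 10s - server appears hung")
except requests.exceptions.ConnectionError as exc:
    print(f"Connection failed: {exc}")
```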
Expected behavior
The add_episode call for the update should be processed successfully, the new fact should supersede the old one (or be added alongside it), and the server should remain responsive with normal CPU usage.
Actual behavior
The update add_episode call might return a success message (indicating the episode is queued), but the background processing triggered by this update causes the graphiti-mcp process to enter an infinite loop or hang, consuming 100% CPU and becoming unresponsive.
Troubleshooting Steps Taken:
- Restarting: docker compose down && docker compose up -d temporarily resolves the issue until the next update attempt.
- Delay: Introducing a significant delay (minutes) between the initial add_episode and the update add_episode did not prevent the hang.
- Logging: Enabling DEBUG logging in graphiti_mcp_server.py showed increased Neo4j activity (MATCH queries, COMMITs) after the update call was queued, but did not reveal an obvious infinite loop pattern or specific error message before the server became unresponsive. The hang seems to occur during the background processing phase.
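For anyone who wants to reproduce the DEBUG logging step, a generic sketch using Python's standard logging module is below; the exact logging setup inside graphiti_mcp_server.py may differ, so treat the logger names as assumptions:

```python
# Force verbose output for the whole process by overriding the root logger.
# The actual configuration hooks in graphiti_mcp_server.py may differ.
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

# The Neo4j Python driver logs under "neo4j" and is very chatty at DEBUG;
# dial it back if the output becomes unreadable.
logging.getLogger("neo4j").setLevel(logging.INFO)
```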
Hi @ralph-burleson, can you share more about your setup?
- Machine the Graphiti MCP server is being run on (MacBook M-series, Intel/AMD64, other?)
- OS (Linux, Windows, MacOS)
- How many CPU cores and how much memory are allocated to Docker containers (if running on MacOS)
EDIT:
If you're using Docker Desktop on a Mac, please ensure that you have allocated at least 2 CPU and 2GB of Memory to Docker for containers (see the Resources config).
Additionally, ensure that you've allocated adequate storage.
Let us know if this helps. Thanks!
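A quick way to confirm what the engine actually sees, without opening the GUI, is sketched below (assumes the docker CLI is on PATH; the 2 CPU / 2 GB thresholds mirror the suggestion above):

```python
# check_docker_resources.py - print the CPU count and memory visible to the
# Docker engine (i.e. the Docker Desktop VM on macOS).
import subprocess

out = subprocess.run(
    ["docker", "info", "--format", "{{.NCPU}} {{.MemTotal}}"],
    capture_output=True, text=True, check=True,
).stdout.split()

cpus, mem_bytes = int(out[0]), int(out[1])
print(f"Docker sees {cpus} CPUs and {mem_bytes / 2**30:.1f} GiB of memory")
if cpus < 2 or mem_bytes < 2 * 2**30:
    print("Below the suggested 2 CPU / 2 GB - raise the limits under Settings > Resources")
```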
Hey team, adding some details to the report above. I can repro the freeze every single time I tweak an already‑added episode.
Quick repro (100% for me)
- Spin up the stack: `docker compose -f mcp_server/docker-compose.yml up -d`
- Add a new fact – works fine: `add_episode(name="Pet Info", episode_body="My dog **Mambo** has nickname *'Danger'*.")`
- A minute later, correct that fact: `add_episode(name="Pet Info Correction", episode_body="Update: Mambo's nickname is actually *'Lightning'*.")`
- Watch `docker stats` → graphiti-mcp jumps to 100% CPU.
- Hit `curl --max-time 60 http://localhost:8000/sse` → it hangs.
New, unrelated episodes work; updates kill the server every time.
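To make the CPU spike easy to catch while reproducing, a small polling sketch along these lines works (the container name comes from the `docker compose ps` output further down; adjust it if your compose project name differs):

```python
# watch_cpu.py - poll `docker stats` once per second and flag when the
# graphiti-mcp container pegs a core.
import subprocess, time

CONTAINER = "mcp_server-graphiti-mcp-1"  # from `docker compose ps`; adjust to your project name

while True:
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.CPUPerc}}", CONTAINER],
        capture_output=True, text=True, check=True,
    ).stdout.strip()                      # e.g. "100.34%"
    cpu = float(out.rstrip("%"))
    flag = "  <-- pegged" if cpu > 95 else ""
    print(f"{time.strftime('%H:%M:%S')}  {cpu:6.2f}%{flag}")
    time.sleep(1)
```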
What should happen
- Episode gets updated.
- SSE keeps streaming.
- CPU stays normal.
What actually happens
- Update call looks accepted, but…
- Background worker pegs one core (Python at 100%).
- `/sse` never responds and all later tool calls stall.
My rig
Host
MacBook Pro 18,1 – Apple M1 Pro
10 cores (8P + 2E) • 32 GB RAM • macOS firmware 11881.101.1
Docker VM
Docker Desktop 27.4.0 (Linux VM, aarch64)
Kernel 6.10.14-linuxkit • 10 vCPUs • 15.6 GiB RAM
Storage driver: overlay2 • cgroup v2
Container snapshot (just before the freeze)
$ docker compose ps
NAME STATUS PORTS
mcp_server-graphiti-mcp-1 Up 53s 0.0.0.0:8000->8000/tcp
mcp_server-neo4j-1 Up (healthy) 1m 7474/tcp,7687/tcp
$ docker compose top graphiti-mcp
UID PID PPID C CMD
root 50376 50338 81 /app/.venv/bin/python3 graphiti_mcp_server.py …
Inside the container:
top - 08:41:42 … load avg: 2.3 1.7 0.9
PID %CPU %MEM COMMAND
10 100.3 0.9 python3
Tail end of the logs
08:37:46,520 INFO Episode 'Backend Authentication - User Functionality' added successfully
08:37:46,520 INFO Building communities after episode …
A dozen OpenAI chat/embedding calls precede the stall.
Things I tried (no luck)
- Restart – fixes it until the next edit.
- Delay – waiting minutes between the two calls still hangs.
- DEBUG logging – shows heavy Neo4j traffic but no tracebacks.
Looks like it's related to https://github.com/getzep/graphiti/issues/290
Having the same issue; it gets stuck on "Building communities after episode..."
+1. Also having this issue
Would y'all please share what OS you're running on? If macOS, please also share how you've configured resources for the Docker VM (memory, CPU, storage).
I'm running Docker on an Ubuntu machine. Here's information on my environment:
### OS & Kernel
Distributor ID: Ubuntu
Description: Ubuntu 22.04.5 LTS
Release: 22.04
Codename: jammy
Linux 6.8.0-57-generic x86_64
### CPU
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
Thread(s) per core: 1
Core(s) per socket: 22
Socket(s): 1
### Memory
total used free shared buff/cache available
Mem: 94Gi 29Gi 5.3Gi 763Mi 59Gi 62Gi
Swap: 8.0Gi 8.0Gi 0.0Ki
MemTotal: 98872780 kB
### Storage
NAME SIZE TYPE MOUNTPOINT
sda 528K disk
sdb 528K disk
sdc 2.4T disk
├─sdc1 1M part
└─sdc2 2.4T part /
sdd 5G disk
└─sdd1 5G part /boot
sr0 1024M rom
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc2 ext4 2.5T 1.7T 635G 73% /
### Docker daemon
28.0.2 on Ubuntu 22.04.5 LTS
Same issue here (I am using an Elest.io service hosted on Hetzner): 4 vCPU, 8 GB RAM, with just the Graphiti MCP server and Neo4j on that Docker host.
Processing hangs on building communities. Not sure yet what is going on, but as far as I can see, only one core is being used (at 100%) with occasional spikes on other cores.
I am not sure if that was the same issue (for me it was happening on every add_episode). But what helped:
- Upgrading Neo4j to 4.26.2 -> this enforced a stronger schema policy and brought up some warnings, and then I tried ->
- Creating some additional elements in the database before starting to use it (my sh file below)
- After some trial and error I switched entity_name_embedding_index to FULLTEXT - stressing this one, because Graphiti was not processing requests without it
Still testing whether it fully works, but I can see this is not only my issue, so I decided to share (I have been sitting on this issue for 2 days now).
```bash
#!/usr/bin/env bash
set -euo pipefail

# Load Neo4j credentials (NEO4J_USER, NEO4J_PASSWORD) from .env
set -o allexport
source .env
set +o allexport

echo "Creating Neo4j indexes…"

CONTAINER=$(docker-compose ps -q neo4j)
if [[ -z "$CONTAINER" ]]; then
  echo "❌ neo4j container not found"
  exit 1
fi

docker exec -i "$CONTAINER" cypher-shell \
  -u "${NEO4J_USER}" -p "${NEO4J_PASSWORD}" <<'EOF'
CREATE INDEX graph_index IF NOT EXISTS FOR (n:GraphNode) ON (n.uuid);
CREATE INDEX edge_index IF NOT EXISTS FOR ()-[r:CONNECTED]->() ON (r.uuid);

CREATE INDEX entity_uuid_index IF NOT EXISTS FOR (n:Entity) ON (n.uuid);
CREATE INDEX entity_name_index IF NOT EXISTS FOR (n:Entity) ON (n.name);
CREATE INDEX entity_group_index IF NOT EXISTS FOR (n:Entity) ON (n.group_id);
CREATE FULLTEXT INDEX entity_name_embedding_index IF NOT EXISTS FOR (n:Entity) ON EACH [n.name_embedding];
CREATE INDEX entity_summary_index IF NOT EXISTS FOR (n:Entity) ON (n.summary);

CREATE INDEX relation_uuid_index IF NOT EXISTS FOR ()-[r:RELATES_TO]->() ON (r.uuid);
CREATE INDEX relation_fact_index IF NOT EXISTS FOR ()-[r:RELATES_TO]->() ON (r.fact_embedding);
CREATE INDEX relation_group_index IF NOT EXISTS FOR ()-[r:RELATES_TO]->() ON (r.group_id);
CREATE INDEX relation_episodes_index IF NOT EXISTS FOR ()-[r:RELATES_TO]->() ON (r.episodes);

CREATE FULLTEXT INDEX node_name_and_summary IF NOT EXISTS FOR (n:Entity) ON EACH [n.name, n.summary, n.name_embedding];
EOF

echo "✅ Indexes created successfully."
```
P.S. Tested it on updates as well - works - so far so good.
I've got the same problem on multiple systems, including Windows 11 Pro and Ubuntu Server. On Windows it seems to run a little longer before it gets unstable. The things @wariatus found seem to help, and I was able to improve stability for a little while, but I haven't been able to keep anything running stably for more than a few calls yet.
I am mostly using mcp_server, so my usage might differ from someone using Graphiti directly. But my observation is: I have to wait until build_communities has finished processing before calling another add_episode. It is painful, because with every addition to the dataset it takes longer. A client-side sketch of that wait is below.
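A minimal sketch of that "wait before the next add_episode" workaround, enforced client-side. `call_add_episode` is a placeholder for however your MCP client invokes the tool, and the fixed sleep is a crude stand-in for actually detecting that community building has finished:

```python
# Serialize add_episode calls so a new episode is never submitted while the
# previous one (and its community build) may still be processing.
# `call_add_episode` is a hypothetical callable wrapping your MCP client.
import asyncio

_episode_lock = asyncio.Lock()

async def add_episode_serialized(call_add_episode, name: str, episode_body: str,
                                 settle_seconds: float = 30.0):
    async with _episode_lock:
        result = await call_add_episode(name=name, episode_body=episode_body)
        # Crude pause to let the background worker finish community building
        # before the next episode is allowed through.
        await asyncio.sleep(settle_seconds)
        return result
```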
I plan to debug build_communities tomorrow.
I really like this idea and I am happy someone spent time creating an mcp_server based on a knowledge graph with proper math behind it ;)
Still having this same issue too. I will probably give up on graphiti for now until the issue is resolved. An alternative that also has a third-party MCP is LightRAG; it does everything except bitemporal edges, but that could probably be implemented with a bit of work.
Replicated with community building. Occurs only in the container deployment. Removed in #512 .
Needs further investigation if communities are to be used via MCP.
Closing as #512 has been merged. If this issue impacted you, please pull from main and rebuild your container.