High memory usage in development (10GB+)
Link to the code that reproduces this issue
https://github.com/diogomartino/next-app-router-playground
To Reproduce
- Start the app in dev mode
- Open every single route you have available
- Memory usage keeps rising like crazy. The more routes you have, the more the RAM rises.
Current vs. Expected behavior
I expected that Next.js wouldn't use 10 GB of RAM in dev mode.
Provide environment information
Ubuntu 22.04.5 LTS x86_64
Kernel: 6.8.0-57-generic
pnpm 7.33.5
Node v20.13.1
next: 15.2.3
Using Turbopack.
Which area(s) are affected? (Select all that apply)
Performance
Which stage(s) are affected? (Select all that apply)
next dev (local)
Additional context
So I made a script that fetches every single route for my project and plotted the RAM usage so you can see it increasing. As soon as all routes are fetched at least once, the memory usage stabilizes.
For my project, the next-server starts at around 300 MB and rises all the way up to 9-10 GB. I also tried a template from Vercel (https://vercel.com/templates/next.js/app-directory) and the same happens, but it's less noticeable because the code is much simpler and it has fewer routes. In the template's case, it went up to about 4 GB, which is already too much.
I also tried with the most recent version (15.3.0) and it's the same.
Hi, can you try v15.2.5, which contains a SWC fix for a high memory usage issue?
@huozhi same thing. I already tried with 15.3.0 and it's the same.
I am also having the same issue after upgrading to the latest version, 15.3.0; the local dev server keeps crashing due to memory exhaustion.
@diogomartino do you observe the same if you downgrade to 15.2.0 or earlier?
@icyJoseph with 15.2.0 it's the same. Also tested on the latest canary (15.3.1-canary.7)
Do you mind sharing the script you are using to plot and stuff?
I guess 15.1.0 would yield the same result? Getting a bit worried here ~
@icyJoseph Sure. This is what I'm using to trigger all routes: https://github.com/diogomartino/next-route-trigger. It doesn't work for dynamic routes, only static ones. The dynamic routes will return 404, so it might not have the desired effect when testing some projects. In my case I was using it to test on my own project, which has ~80 static routes.
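For anyone who can't use that repo, the core of the idea is just fetching every static path once so the dev server compiles it; here is a rough bash sketch of the same idea (the routes.txt file with one path per line and the base URL are assumptions for illustration, not how the actual tool discovers routes):
#!/bin/bash
# Rough sketch of the route-trigger idea: fetch every static route once.
# routes.txt (one path per line, e.g. /dashboard) and the base URL are
# assumptions for illustration.
BASE_URL=${1:-http://localhost:3000}
ROUTES_FILE=${2:-routes.txt}

while IFS= read -r route; do
  echo "Fetching ${BASE_URL}${route}"
  # -s silences progress, -o /dev/null discards the body, -w prints the status
  curl -s -o /dev/null -w "%{http_code}\n" "${BASE_URL}${route}"
done < "$ROUTES_FILE"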
To plot I'm using https://github.com/astrofrog/psrecord with this script:
#!/bin/bash
NAME=$1
INTERVAL=${2:-1}

if [ -z "$NAME" ]; then
  echo "Usage: $0 <process_name> [interval]"
  exit 1
fi

echo "Waiting for process '$NAME' to start..."

# Loop until we find the process
while true; do
  PID=$(pgrep -f "$NAME" | head -n 1)
  if [ -n "$PID" ]; then
    break
  fi
  sleep 1
done

TIMESTAMP=$(date +%s)
OUTPUT="${TIMESTAMP}.png"

echo "Found process '$NAME' (PID: $PID)"
echo "Monitoring (press Ctrl+C to stop)..."

# Clean up nicely on Ctrl+C
trap "echo -e '\nStopping...'; exit" SIGINT

# Run psrecord until interrupted
psrecord "$PID" --interval "$INTERVAL" --plot "$OUTPUT"
And I run it with ./monitor.sh next-server 0.5.
Actually with 15.1.0 it isn't as bad and the routes process 2x faster. Also the CPU usage is a lot lower.
(Memory usage plots attached for 15.3.1-canary.7, 15.2.0, and 15.1.0.)
Great, then we can use all of this to try and narrow down the regression to a canary between 15.1.0 and 15.2.x ~ I'll see if I can get some time later today, but by all means go ahead.
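A rough sketch of what testing each candidate version could look like, reusing the monitor and route-trigger scripts from above (the helper name, the version argument, and the --turbopack flag are assumptions based on this thread, not a prescribed workflow):
#!/bin/bash
# Hedged bisection helper sketch: install one Next.js version at a time and
# start the dev server with Turbopack; measure memory from separate terminals.
# VERSION is whichever canary between 15.1.0 and 15.2.x you want to test next.
VERSION=${1:?Usage: $0 <next-version>}

pnpm add "next@${VERSION}"
pnpm exec next dev --turbopack
# In other terminals: run ./monitor.sh next-server 0.5 and the route-trigger
# script, then compare the resulting memory plots across versions.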
@diogomartino forgot to ask, but are you using Turbopack as well?
@icyJoseph yes, my full environment is in the first post.
This problem is not new; I have reported it myself before, but there has never been an adequate response. The high memory consumption has also been impacting our production environments. As a result, we decided to move away from Next.js and explore other solutions until a definitive fix lands.
Hey @diogomartino, thanks for digging into this and adding the scripts.
I did some digging based on the repository + scripts and found this is caused by a chunking optimization we had to rework for Turbopack production builds. It now redoes some work between routes that could be shared. We were already tracking re-applying that optimization, and we'll prioritize a fix.
We're also going to add benchmarks for the case of many modules in a layout combined with a lot of page.tsx files, since that's the specific shape that reproduces this. The current benchmarks mostly check many modules on a single page.
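For anyone who wants to reproduce that shape locally before such benchmarks land, a rough sketch of generating it (the directory name and route count are made up for illustration; you would add a module-heavy layout.tsx in the same segment yourself):
#!/bin/bash
# Hedged sketch: generate many near-identical page.tsx files under a single
# app-router segment so they all share one layout. Pair this with a layout.tsx
# in APP_DIR that imports lots of modules to match the case described above.
APP_DIR=app/stress
ROUTES=${1:-80}

mkdir -p "$APP_DIR"
for i in $(seq 1 "$ROUTES"); do
  mkdir -p "$APP_DIR/route-$i"
  cat > "$APP_DIR/route-$i/page.tsx" <<EOF
export default function Page() {
  return <div>route $i</div>;
}
EOF
done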
Hi @timneutkens, may I know if this impacts production builds as well? I am facing memory crashes on Next.js 15.3.1 in production. Any pointers?
Are you using Turbopack build alpha?
Yes, I did, but I reverted it after I started experiencing CPU spikes; the high CPU load persists even after reverting. Sorry, I described it as a memory crash, but I later found out it's actually CPU spikes.
@timneutkens any news?
Got same issue
Still happening on v15.4.0-canary.97. It seems the production build side has reached 100% coverage, so congrats, but some issues remain in development, most significantly memory usage and performance, especially when multiple views are open.
@diogomartino this is part of the prioritized list of issues. We'll get to it soon.
Any updates here?
Any news, guys? Or is it a bottleneck?
This issue must be the GOAT of all issues, the nightmare of the Next.js team.
Next 16.1 and the issue is worse than ever. It's not just the dev environment now: my deployment is eating 4 more GB of RAM just from pushing an update that migrated Next from the latest 15 release to 16.1.
Mine is even worse; it's pretty much unusable at this point. Switching to webpack instead of Turbopack seems to reduce it to around 9 GB, but it's so slow.
Any updates on this? Has anyone been able to find a workaround?
I would've upgraded to 64 GB, but current prices aren't helping. Cursor + Next.js have been keeping my 32 GB machine at 85%+ usage at all times. I was having issues with a single-page AI SDK chatbot where the Chrome tab used 2 GB of memory and the dev server took the rest of my available memory. I'm not sure whether I can blame the Chrome tab's memory usage on Turbopack or not. The production build takes about 300 MB of memory. Anyway, it's probably not related to this issue.
Turbopack just released the experimental bundle analyzer, which is very nice. It would also be nice to have a real-time visual memory profiler. There is a command-line flag for profiling that I believe only works with webpack, and even that either runs out of the max JS heap or produces multi-GB profiling results that take hours to load.
If it's possible, a real-time memory usage chart for the dev runtime would be a must-have.
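Until something like that exists, a crude terminal stand-in is possible; this sketch assumes a Linux machine with pgrep and ps available and that the dev process is still named next-server:
#!/bin/bash
# Print the next-server resident set size once per second. Purely a stopgap
# for eyeballing memory growth while navigating routes in dev.
while true; do
  PID=$(pgrep -f next-server | head -n 1)
  if [ -n "$PID" ]; then
    RSS_KB=$(ps -o rss= -p "$PID")
    printf '%s  next-server (PID %s)  %d MB\n' "$(date +%T)" "$PID" $((RSS_KB / 1024))
  fi
  sleep 1
done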