Reached heap limit Allocation failed - JavaScript heap out of memory
Link to the code that reproduces this issue
https://github.com/NeoSahadeo/Avg-NextJS-experience
To Reproduce
- run pnpm run dev or pnpm next dev
Current vs. Expected behavior
Current:
<--- Last few GCs --->
[3314:0xde692d0] 178177 ms: Scavenge 2024.1 (2060.6) -> 2016.6 (2060.6) MB, 4.90 / 0.00 ms (average mu = 0.692, current mu = 0.515) allocation failure;
[3314:0xde692d0] 178198 ms: Scavenge 2024.3 (2060.6) -> 2016.7 (2060.8) MB, 4.67 / 0.00 ms (average mu = 0.692, current mu = 0.515) allocation failure;
[3314:0xde692d0] 181163 ms: Mark-Compact 2617.4 (2653.9) -> 2214.5 (2274.8) MB, 2633.36 / 0.00 ms (average mu = 0.565, current mu = 0.404) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0xb8cf03 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
2: 0xf04610 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
3: 0xf048f7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
4: 0x1116545 [node]
5: 0x112e3c8 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
6: 0x11044e1 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
7: 0x1105675 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
8: 0x10e2cc6 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
9: 0x153e806 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
10: 0x7f0d36e99ef6
<--- Last few GCs --->
[4029:0x19bec2d0] 182670 ms: Scavenge 2017.4 (2046.8) -> 2016.5 (2047.1) MB, 1.96 / 0.00 ms (average mu = 0.603, current mu = 0.443) allocation failure;
[4029:0x19bec2d0] 182676 ms: Scavenge 2017.5 (2047.1) -> 2016.5 (2047.1) MB, 5.03 / 0.00 ms (average mu = 0.603, current mu = 0.443) allocation failure;
[4029:0x19bec2d0] 184729 ms: Mark-Compact 2610.5 (2640.1) -> 2214.1 (2244.8) MB, 1737.89 / 0.00 ms (average mu = 0.444, current mu = 0.171) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0xb8cf03 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
2: 0xf04610 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
3: 0xf048f7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
4: 0x1116545 [node]
5: 0x112e3c8 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
6: 0x11044e1 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
7: 0x1105675 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
8: 0x10e220a v8::internal::Factory::AllocateRawWithAllocationSite(v8::internal::Handle<v8::internal::Map>, v8::internal::AllocationType, v8::internal::Handle<v8::internal::AllocationSite>) [node]
9: 0x10ef6a8 v8::internal::Factory::NewJSObjectFromMap(v8::internal::Handle<v8::internal::Map>, v8::internal::AllocationType, v8::internal::Handle<v8::internal::AllocationSite>) [node]
10: 0x134b2bf v8::internal::JSObject::New(v8::internal::Handle<v8::internal::JSFunction>, v8::internal::Handle<v8::internal::JSReceiver>, v8::internal::Handle<v8::internal::AllocationSite>) [node]
11: 0x1358546 v8::internal::JSDate::New(v8::internal::Handle<v8::internal::JSFunction>, v8::internal::Handle<v8::internal::JSReceiver>, double) [node]
12: 0xf8b430 v8::internal::Builtin_DateConstructor(int, unsigned long*, v8::internal::Isolate*) [node]
13: 0x1977df6 [node]
ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL Command was killed with SIGABRT (Aborted): next dev
Provide environment information
next info crashes.
System info:
Host: 82H8 IdeaPad 3 15ITL6
Kernel: 6.15.9-zen1-1-zen
Uptime: 12 mins
Packages: 2352 (pacman), 48 (flatpak)
Shell: bash 5.3.3
Resolution: 1920x1080
DE: Hyprland
Terminal: alacritty
CPU: 11th Gen Intel i3-1115G4 (4) @ 4.100GHz
GPU: Intel Tiger Lake-LP GT2 [UHD Graphics G4]
Memory: 4144MiB / 11738MiB
Binaries:
node version: v20.19.2 (I tried using 18.18 with the same result)
pnpm version: 10.14.0
npm version: 10.8.2
Which area(s) are affected? (Select all that apply)
Runtime
Which stage(s) are affected? (Select all that apply)
next dev (local)
Additional context
I believe I used Next.js roughly a month ago to set up a project and it ran just fine. Unfortunately I do not have the project with me. next v14.2.31 works just fine.
Does not work: [email protected] [email protected] [email protected] [email protected] [email protected]
Does work: next@14.2.31
Hi, have you tried with another package manager, or a different version of pnpm? Just in case.
It runs without problem here: https://stackblitz.com/github/NeoSahadeo/Avg-NextJS-experience
I tried npm and Deno; both have the same outcome.
Let me know if there is another package manager I should try.
Really strange, a project that small should just run, as the StackBlitz shows.
Maybe give pnpm next dev --turbo a go too.
Is there more info you could share about the error stack, though?
Hi, are you using a Dockerfile to build/develop? Make sure you are not copying to the root of the image file system.
I tried pnpm next dev --turbo but it didn't change anything.
All running on my base system working from my home directory.
Appreciate all the help. It might just be a "my system" issue.
Not sure about that. I have made a PR to try and fix an issue I found in another report; if that gets merged, we can try it out against your system. Also, 15.4.6 did have some OOM fixes (I read in another thread), so it might be worth giving a try too.
The PR got merged; you should be able to try the latest canary to see if it helps :)
Unfortunately it still crashes for me (a screenshot notification is hiding it, but it shows ~2 GB of memory usage and generally no CPU usage).
<--- Last few GCs --->
[633032:0x55b669cc5000] 142926 ms: Scavenge (interleaved) 2019.3 (2050.0) -> 2017.4 (2050.0) MB, pooled: 0 MB, 3.31 / 0.00 ms (average mu = 0.545, current mu = 0.392) allocation failure;
[633032:0x55b669cc5000] 147028 ms: Mark-Compact 2612.4 (2643.0) -> 2215.0 (2247.7) MB, pooled: 0 MB, 3540.51 / 0.00 ms (average mu = 0.353, current mu = 0.178) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0x55b636d9e666 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
2: 0x55b63728a274 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
3: 0x55b63728a56d v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
4: 0x55b6374bcf2c [node]
5: 0x55b6374d7f06 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
6: 0x55b6374b1000 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
7: 0x55b6374b15d8 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
8: 0x55b63749150a v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
9: 0x55b6378e2d84 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
10: 0x55b637e4d476 [node]
I have the same issue, but only on next start and after hitting the site a few times. It seems to be isolated to the server environment too; if I do next start on my dev box and hammer it with autocannon, it is fine.
<--- Last few GCs --->
[32:0x1041ed50] 250942 ms: Scavenge 2043.5 (2082.8) -> 2041.9 (2083.0) MB, 6.8 / 0.5 ms (average mu = 0.172, current mu = 0.133) allocation failure;
[32:0x1041ed50] 250967 ms: Scavenge 2043.7 (2083.0) -> 2042.1 (2083.3) MB, 6.2 / 0.7 ms (average mu = 0.172, current mu = 0.133) allocation failure;
[32:0x1041ed50] 251002 ms: Scavenge 2043.9 (2083.3) -> 2042.3 (2087.5) MB, 8.8 / 1.0 ms (average mu = 0.172, current mu = 0.133) allocation failure;
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xadef78 node::Abort() [next-server (v15.2.3)]
2: 0x99b5e2 [next-server (v15.2.3)]
3: 0xcfe700 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [next-server (v15.2.3)]
4: 0xcfead9 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [next-server (v15.2.3)]
5: 0xefcdd5 [next-server (v15.2.3)]
6: 0xefcec1 [next-server (v15.2.3)]
7: 0xf1160a v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*, v8::GCCallbackFlags) [next-server (v15.2.3)]
8: 0xf121ce v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [next-server (v15.2.3)]
9: 0xf132ac v8::internal::Heap::CollectAllGarbage(int, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [next-server (v15.2.3)]
10: 0xe90e68 v8::internal::StackGuard::HandleInterrupts() [next-server (v15.2.3)]
11: 0x12ec589 v8::internal::Runtime_StackGuardWithGap(int, unsigned long*, v8::internal::Isolate*) [next-server (v15.2.3)]
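For reference, the autocannon load test mentioned above might look roughly like this; the port, connection count, and duration are assumptions, not taken from the report:

```bash
# Build and start the production server locally, then hammer it as described above.
pnpm next build
pnpm next start -p 3000 &
npx autocannon -c 50 -d 60 http://localhost:3000
```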
Are you on a Linux system? If so, which kernel are you using?
Got a friend running Win11 on the same project and it works fine for him. The only difference I can think of is the kernel.
It'd be interesting to narrow that down.
Perhaps unrelated to this case, but someone also identified a memory leak with fetch - https://github.com/vercel/next.js/pull/82678
Unlikely to be related, as you can't even start your server. The other user does report it happening when using the server, though that version is not related to the memory leak above, right?
The issue is related to something that changed in v15.4.1 or later.
I mean, we are here now - might as well binary search for the canary that did it. There are 130 canaries, so we can resolve this in about 7-8 steps: start at 65, pnpm i next@15.4.0-canary.65, and jump forward or backward from there. Thanks in advance
You'll end up in a situation where next@15.4.0-canary.X works, but next@15.4.0-canary.X+1 doesn't.
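The bisection described above can be scripted; this is only a rough sketch (the bounds and the manual crash check are assumptions, not part of the original suggestion):

```bash
#!/usr/bin/env bash
# Rough bisection helper over the 15.4.0 canaries. The bounds below are
# assumptions: set "lo" to the last known-good canary and "hi" to the first
# known-bad one, and confirm each crash manually.
lo=1
hi=130
while (( hi - lo > 1 )); do
  mid=$(( (lo + hi) / 2 ))
  pnpm add "next@15.4.0-canary.${mid}"
  echo "Now run 'pnpm next dev' in another terminal and watch for the heap-limit crash."
  read -rp "Did 15.4.0-canary.${mid} crash? (y/n) " crashed
  if [[ "${crashed}" == "y" ]]; then hi=${mid}; else lo=${mid}; fi
done
echo "First bad canary: 15.4.0-canary.${hi}"
```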
It might make things complicated, but I'm on 15.2.3 and this project had previously been working, which obviously leads me to look at the host. This is running in a docker container:
Runtime environment (inside container)
OS: Ubuntu 22.04.5 LTS (Jammy), glibc 2.35
Kernel (host): 6.8.0-71-generic (containers share host kernel)
Node: v18.20.5 (V8 10.2.154.26)
Next.js: 15.2.3
Arch: x86_64
Package manager: pnpm 9
Started my search now, I'll let you know how it goes
Ok, here's what I have so far.
Works:
- 15.4.0-canary.60 -- yes
- 15.4.0-canary.95 -- no
- 15.4.0-canary.77 -- no
- 15.4.0-canary.68 -- no
- 15.4.0-canary.64 -- yes
- 15.4.0-canary.66 -- yes
- 15.4.0-canary.67 -- yes
- 15.4.0-canary.68 -- no
Looks like something in canary 68 for v15.4.0 stops the Next.js server from running on my machine. I updated my kernel version, and @1jmj is running a standard kernel in Docker, so I think I'd rule out a kernel issue.
I'll have a look at the canary 68 patch.
I started looking through the diffs. There are quite a few to go through, and I don't have much experience with Rust, so feel free to jump in.
Commits:
- 67: caa54e46b0ff96077529b037f294a1ab1091890c
- 68: 94c3927ba28a01d8e8efb55147789f087a8f3488
Edit by maintainer: replaced the file diff with a compare link - https://github.com/vercel/next.js/compare/v15.4.0-canary.67...v15.4.0-canary.68
Hi, so let's try this one trick: replace the next version in package.json with https://vercel-packages.vercel.app/next/commits/<commit-hash>/next
I'd say, first verify that the first commit hash - the one that lands directly after 67 - works, then try the very last one in the list, which lands canary 68; it should fail.
If I had to ballpark a guess, I'd try:
- Should work, e9f70d0fdb67f9a024c84a2a542684c72be8e12c
- Perhaps... 46fa52c54641a94b9ae9fbaebdb9346b2816c3af would fail? swc_core update
- Should work, dd958a13ab4a5f972be4b25be20030958d338346
- Perhaps... 1429d7242159be90ecb2423f034770cb251a0e54 would fail? Rust update
But other than that, binary search once again - let me know if it works to search like this.
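To make the per-commit step above concrete, testing a single commit might look roughly like this; the commit hash is a placeholder to fill in from the compare link:

```bash
# Sketch of testing a single commit's build, per the suggestion above.
# 1. Edit package.json by hand so the dependency reads (hash is a placeholder):
#      "next": "https://vercel-packages.vercel.app/next/commits/<commit-hash>/next"
# 2. Reinstall and try to reproduce:
pnpm install
pnpm next dev   # watch whether the dev server still hits the heap limit
```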
Monkey Patch
Well, I fixed the crashing issue by patching a few types. It seems to be caused by some sort of error because of the new React.
Steps to patch:
- Update mdx-js
- Update jsdom
- I changed the types of node_modules/@types/jsdom/base.d.ts
- Manually changed occurrences of JSX in node_modules/@types/mdx/index.d.ts to React.JSX
- Fixed type/logic issues in ESLint-plugin-next/src/no-duplicate-head.ts. There were 2 that caused crashes, and I assume in the prod release it fails silently while the server is still up.
Issues
- It still seems to halt/freeze my computer; closing the program leaks memory and does not fully stop the process.
Commit -> https://github.com/NeoSahadeo/next.js-15.4.0-canary-68-temp-patch/commit/5017f6088a06d32c612cad92888372203166ed6b
Repo -> https://github.com/NeoSahadeo/next.js-15.4.0-canary-68-temp-patch
Super strange:
e9f70d0 - works
46fa52c - works
dd958a1 - works
1429d72 - works
The only errors are type errors, and nothing crashes.
Aha, but OK, that was just guessing on my side - so you can binary search from 1429d72 onwards; we are closing in on the commit that introduced the issue. https://github.com/vercel/next.js/compare/v15.4.0-canary.67...v15.4.0-canary.68
I've updated to 15.5.2 and the problem is happening on localhost and servers now. It seems to happen whenever my app needs data, but the DB is next to idle.
Started testing canary 67..68.
A super weird note: I updated to [email protected] and it works perfectly.
Then I downgraded back to [email protected] and the problem appeared again.
I'll finish up testing the commits, but the issue is "technically" solved in the latest release.
@NeoSahadeo nice! It would be good to know what happened, though - I guess 15.5.0 did the trick; the other two have not had any changes that would be related to this.
Got the commit where the crash is introduced: ef888e8
What is tested:
- a094280 -- ok
- dd958a1 -- ok
- 2bb8978 -- ok
- ef888e8 -- no
- 56c3dce -- ok
- 7a66008 -- ok
Since the release of Next.js 16, this has been happening far more often, causing the dev server to crash due to heap memory issues.
I can't navigate two routes in development mode without this issue occurring. From what I could collect from top, the allocated memory just keeps growing until the server crashes. Using the following setup:
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 25.1.0: Mon Oct 20 19:34:05 PDT 2025; root:xnu-12377.41.6~2/RELEASE_ARM64_T6041
Available memory (MB): 49152
Available CPU cores: 14
Binaries:
Node: 24.11.0
npm: 11.6.1
Yarn: N/A
pnpm: 10.22.0
Relevant Packages:
next: 16.0.3 // Latest available version is detected (16.0.3).
eslint-config-next: N/A
react: 19.2.0
react-dom: 19.2.0
typescript: 5.9.3
Next.js Config:
output: N/A
This must be a high-priority issue, right? It not only impacts DX but makes it damn near impossible to work. Has anyone found anything that could help - solutions or workarounds?