Bun leaks memory in Workers
What version of Bun is running?
1.0.3+f77df12894e7952ea58605432bf9f6d252fff273
What platform is your computer?
Linux 5.15.90.1-microsoft-standard-WSL2 x86_64 x86_64
What steps can reproduce the bug?
Create a main file which starts up a worker, waits until it completes its work, then logs memory usage:

```ts
async function run() {
  for (let i = 0; i < 100; i++) {
    await new Promise((resolve) => {
      const worker = new Worker('./worker.ts')
      const on_message = () => {
        worker.removeEventListener('message', on_message)
        resolve(1)
      }
      worker.addEventListener('message', on_message)
      worker.postMessage('start')
    })
    console.log(process.memoryUsage().rss)
  }
}

run()
```
And a worker which allocates a large amount of memory:

```ts
function on_message() {
  const arrays = []
  for (let i = 0; i < 1_000_000; i++) {
    arrays.push(new Array(100).fill(i))
  }
  self.postMessage(arrays.length)
  self.removeEventListener('message', on_message)
}

self.addEventListener('message', on_message)
```
Then run main with `bun run main.ts`.
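The raw RSS byte counts printed by the loop are hard to compare at a glance. A small helper (not part of the original repro; `formatRss` is a name made up for illustration) can convert them to megabytes:

```ts
// Hypothetical helper, not part of the original repro: convert a raw
// RSS byte count (as printed by the loop above) into a readable MB string.
function formatRss(bytes: number): string {
  return `${(bytes / 1024 / 1024).toFixed(1)} MB`
}

// The loop could then log: console.log(formatRss(process.memoryUsage().rss))
console.log(formatRss(1050456064)) // ≈ "1001.8 MB"
```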
What is the expected behavior?
Memory usage should spike as the worker begins, then drop sometime afterwards as garbage collection is run. The script should continue running until the loop completes.
When a (functionally) identical script and worker (code below) is run via Node, memory usage stays roughly the same on every iteration.

```js
import { Worker } from 'worker_threads'

async function run() {
  for (let i = 0; i < 100; i++) {
    await new Promise((resolve) => {
      const worker = new Worker('./node/worker.js')
      const on_message = () => {
        worker.removeListener('message', on_message)
        resolve(1)
      }
      worker.addListener('message', on_message)
      worker.postMessage('start')
    })
    console.log(process.memoryUsage().rss)
  }
}

run()
```
And the corresponding Node worker:

```js
import { parentPort } from 'worker_threads'

function on_message() {
  const arrays = []
  for (let i = 0; i < 1_000_000; i++) {
    arrays.push(new Array(100).fill(i))
  }
  parentPort.postMessage(arrays.length)
  parentPort.removeListener('message', on_message)
}

parentPort.addListener('message', on_message)
```
Results in Node:

```
> node node/main.js
971173888
983752704
979537920
988639232
986144768
979836928
984797184
986599424
985792512
^C
```
What do you see instead?
Memory usage continues to rise, eventually crashing (saturates 8gb of RAM and 2gb of swap) at around the 8th iteration.
Results in Bun:

```
> bun bun/main.ts
1050456064
2005811200
2962059264
3915100160
4867907584
5819809792
6765035520
^C
```
Additional information
- Adding `smol` to either `bun run` or `new Worker()` has no effect.
- I fixed this by calling `Bun.gc(true)` immediately after `self.postMessage()` in a personal project, but couldn't reproduce the results here.
- I don't have enough knowledge of swap memory to know if this is unexpected, but after crashing, the swap usage often grows.
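The `Bun.gc(true)` workaround mentioned above would look roughly like this in the worker. This is only a sketch — `Bun.gc` is a Bun-specific API, and, as noted, the effect could not be reproduced with this repro:

```ts
// Sketch of the Bun.gc(true) workaround described above (Bun-specific API;
// its effect could not be reproduced with this repro).
function on_message() {
  const arrays: number[][] = []
  for (let i = 0; i < 1_000_000; i++) {
    arrays.push(new Array(100).fill(i))
  }
  self.postMessage(arrays.length)
  self.removeEventListener('message', on_message)
  // Force a synchronous, full garbage collection pass after replying.
  Bun.gc(true)
}

self.addEventListener('message', on_message)
```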
I think I've run into a similar issue when using Bun's Workers. This error message is thrown and Bun crashes once enough memory has leaked:

```
FATAL ERROR: JavaScript garbage collection failed because thread_get_state returned an error (268435459). This is probably the result of running inside Rosetta, which is not supported.
/Users/jarred/actions-runner/_work/WebKit/WebKit/Source/WTF/wtf/posix/ThreadingPOSIX.cpp(497) : size_t WTF::Thread::getRegisters(const WTF::ThreadSuspendLocker &, WTF::PlatformRegisters &)
```
Thanks for reporting, this is a known issue that we will be fixing.
> FATAL ERROR: JavaScript garbage collection failed because thread_get_state returned an error (268435459). This is probably the result of running inside Rosetta, which is not supported. /Users/jarred/actions-runner/_work/WebKit/WebKit/Source/WTF/wtf/posix/ThreadingPOSIX.cpp(497) : size_t WTF::Thread::getRegisters(const WTF::ThreadSuspendLocker &, WTF::PlatformRegisters &)
Interesting! In my case, Bun never returned an error before crashing.
Any follow-up on when this might be fixed? This is preventing me from using Workers in Bun.
I just re-ran the reproduction code and the memory leak seems to have been mostly fixed somewhere between v1.0.11 and v1.0.12. Memory usage used to double every iteration, but now (in my testing) it went from 1013653504 to 1034555392 after 1000 iterations. That averages out to roughly 21,000 bytes per iteration, compared to roughly 1,000,000,000 bytes per iteration when I originally reported.
Should this issue be closed (the massive memory leak is fixed), or kept open (there still seems to be a tiny leak left over)?
This still happens for me on the most recent release (1.0.33), and even on canary :( Memory grows on each worker run and isn't freed at all, even after the worker has been terminated.
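For reference, explicitly terminating each worker as described above would look roughly like this — `worker.terminate()` is the standard web Worker API that Bun implements; per the report, memory still grows even with this:

```ts
// Variant of the repro loop that explicitly terminates each worker after
// its reply. Per the comment above, memory is still not freed.
async function run() {
  for (let i = 0; i < 100; i++) {
    await new Promise((resolve) => {
      const worker = new Worker('./worker.ts')
      worker.addEventListener('message', () => {
        worker.terminate() // explicitly tear the worker down
        resolve(1)
      }, { once: true })
      worker.postMessage('start')
    })
    console.log(process.memoryUsage().rss)
  }
}

run()
```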
I was running into issues possibly caused by https://github.com/oven-sh/bun/issues/5659 so I tried to use workers as a workaround, hoping that when the worker gets terminated, the memory might be freed. Unfortunately, this doesn't seem to be the case.
This is currently a major issue for us running Bun in production.
Collab with Deno maybe? https://github.com/denoland/deno/issues/18414
Please fix this! The memory just keeps increasing until the server runs out of memory and crashes.