isolated memory leak with mongodb nodejs module
What version of Bun is running?
1.3.1+89fa0f343
What platform is your computer?
Linux 6.14.0-1017-gcp aarch64 aarch64
What steps can reproduce the bug?
Further to issue #12117, we have tried to isolate why there is a leak (in RSS, it seems) with the node-mongodb-native module. We set up a test environment with both Bun 1.3.1 and Node.js 24.10.0, and tested with an earlier version of the mongodb module (6.15.0) as well as the latest (6.20.0). The Mongo setup is a three-node replica set, although we see the same leaks in production against a sharded cluster. The test script either simply runs a 1-minute scheduler (timeout) and does nothing, or in addition connects to Mongo. We run this script in parallel on Bun and Node.js, connecting to the same database. We are not sending any queries or db commands, simply connecting.
We are passing the following mongodb connection options to both Bun and nodejs:
const client = await MongoClient.connect('mongodb://dbuser:[email protected]:27017/testdb', {
  readPreference: ReadPreference.NEAREST,
  maxPoolSize: 100,
  minPoolSize: 0,
  maxIdleTimeMS: 0
});
client.connect(); // redundant: MongoClient.connect() already returns a connected client
We have also tried another script that simply connects, then on a 5-minute cadence disconnects, waits a second, then reconnects. This is not practical in our applications, but we wanted to see if the RSS ever went down. It did not, and grew by 2-3 MB with each reconnection.
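A sketch of that reconnect-cadence test. The MongoClient below is a stand-in stub so the loop itself runs self-contained; the real test used require("mongodb").MongoClient, a 5-minute cadence, and a 1-second pause before reconnecting.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Hypothetical stub in place of the real driver.
const MongoClient = {
  async connect(_uri) {
    return { async close() {} };
  },
};

async function reconnectLoop(uri, cycles, cadenceMs, pauseMs) {
  const rssSamples = [];
  let client = await MongoClient.connect(uri);
  for (let i = 0; i < cycles; i++) {
    await sleep(cadenceMs); // 5 * 60_000 in the real test
    await client.close();
    await sleep(pauseMs);   // 1000 in the real test
    client = await MongoClient.connect(uri);
    rssSamples.push(process.memoryUsage().rss); // observed: +2-3 MB per cycle
  }
  await client.close();
  return rssSamples;
}
```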
What is the expected behavior?
The memory consumption of this simple application should not continually rise. Inspection of the heap shows that Bun is performing some garbage collection (although there is a minor creep in heap size), but we are seeing the RSS increase at approximately 8-12 MB per hour per application, with nothing more than an open connection to the Mongo replica set.
What do you see instead?
RSS memory runaway.
| mins | bun, idle | bun, connect | nodejs, idle | nodejs, connect |
| ---: | ---: | ---: | ---: | ---: |
| 0 | 67088 | 82556 | 75224 | 84388 |
| 15 | 67216 | 82984 | 75352 | 76912 |
| 30 | 67216 | 86220 | 75352 | 77192 |
| 60 | 67332 | 91596 | 75352 | 78408 |
| 90 | 67332 | 95868 | 75352 | 78460 |
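Figures like those in the table can be collected with a small sampler of this kind; the KB units and the fixed sampling interval are assumptions matching the table layout.

```javascript
// Record process.memoryUsage().rss every intervalMs and log "mins | rss"
// rows, resolving with the collected samples (in KB) after `count` rows.
function sampleRss(count, intervalMs, log = console.log) {
  return new Promise((resolve) => {
    const samples = [];
    const started = Date.now();
    const take = () => {
      const mins = Math.round((Date.now() - started) / 60_000);
      const rssKb = Math.round(process.memoryUsage().rss / 1024);
      samples.push(rssKb);
      log(`${mins} | ${rssKb}`);
      if (samples.length >= count) return resolve(samples);
      setTimeout(take, intervalMs);
    };
    take();
  });
}

// e.g. sampleRss(7, 15 * 60_000) for a 15-minute cadence
```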
Additional information
Closing then reconnecting does not lower the RSS and indeed causes a step increase in usage. The only reliable way to control the memory leak is an application restart, which is certainly not a desired solution.
There appears to be much less of a problem with Node.js + mongo (about 500 KB/hour), but we are going to raise this on the MongoDB Jira nonetheless.
Found 1 possible duplicate issue:
- https://github.com/oven-sh/bun/issues/12117
#12117 is indeed related but this provides isolated detail rather than the full applications people are running.
Hi @nektro @Jarred-Sumner, just trying to see if there are any permutations that assist in solving this problem. We have tried most versions of the node-mongodb-native module from 6.10.0 through 6.20.0 with no material change. We have also tried forcing Bun.gc on an interval to see if it helps: on either a 5-minute or 1-minute cadence there is little or no discernible difference, and the RSS keeps climbing at the same rate. Happy to help with testing where possible to expedite this. We have a set of Mongo clusters we can test this on if that would be of assistance, and can provide heap snapshots as well if needed. Appreciate all the effort to resolve this.
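For reference, a forced-GC interval of the kind described can be wired up like this sketch. Bun exposes Bun.gc(true) for a synchronous full collection; the globalThis.gc fallback (Node started with --expose-gc) is an assumption added for comparison, not part of the original test.

```javascript
// Trigger a full GC on whichever runtime hook is available.
function forceGc() {
  if (typeof Bun !== "undefined" && typeof Bun.gc === "function") {
    Bun.gc(true); // synchronous full GC in Bun
    return "bun";
  }
  if (typeof globalThis.gc === "function") {
    globalThis.gc(); // Node.js, when started with --expose-gc
    return "node";
  }
  return "none"; // no GC hook exposed
}

// 1- or 5-minute cadence as in the test above; RSS kept climbing regardless:
// setInterval(() => {
//   forceGc();
//   console.log("rss after gc:", process.memoryUsage().rss);
// }, 5 * 60_000);
```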
We have continued our testing with this simple script, which basically connects to the db using the latest node-mongodb-native module (v6.20.0), with both Bun 1.3.1 and Node.js 24.10.0.
// modules and classes
const { createWriteStream } = require("node:fs");
const { setInterval } = require("node:timers/promises");
const { MongoClient, ReadPreference } = require("mongodb");

let dbc = null;
let dbh = null;

async function init() {
  const memoryUsageStream = createWriteStream("bunmemory.json");
  dbc = await MongoClient.connect('mongodb://user:[email protected]:27017/database', {
    appName: 'bunleak',
    readPreference: ReadPreference.NEAREST,
    maxPoolSize: 10,
    minPoolSize: 0,
    maxIdleTimeMS: 10000,
  });
  dbh = dbc.db('database'); // the client is held in `dbc` (the original snippet referenced an undefined `client`)
  console.log("connected.");
  for await (const _ of setInterval(
    60_000 * 15, // 15 minutes
  )) {
    const payload = JSON.stringify(process.memoryUsage());
    memoryUsageStream.write(`${payload}\n`);
  }
}

init();
We are running this against a three-node Mongo replica set. Running two copies (the second writing to nodememory.json instead), we see vast differences in RSS. From Node.js:
{"rss":77299712,"heapTotal":15400960,"heapUsed":13671568,"external":20597694,"arrayBuffers":18331191}
{"rss":77328384,"heapTotal":15925248,"heapUsed":14676248,"external":20671622,"arrayBuffers":18405079}
{"rss":77590528,"heapTotal":16187392,"heapUsed":14274248,"external":20705910,"arrayBuffers":18439367}
{"rss":77144064,"heapTotal":16187392,"heapUsed":14295272,"external":20746018,"arrayBuffers":18479475}
{"rss":77144064,"heapTotal":16187392,"heapUsed":14692072,"external":20767418,"arrayBuffers":18500875}
{"rss":77144064,"heapTotal":16187392,"heapUsed":14228640,"external":20745190,"arrayBuffers":18478647}
{"rss":77230080,"heapTotal":16187392,"heapUsed":14579456,"external":20806698,"arrayBuffers":18540155}
and Bun:
{"rss":79122432,"heapTotal":7332864,"heapUsed":27669828,"external":20938852,"arrayBuffers":18474408}
{"rss":84283392,"heapTotal":9099264,"heapUsed":29675934,"external":21362494,"arrayBuffers":18797036}
{"rss":85565440,"heapTotal":7696384,"heapUsed":28193741,"external":21112077,"arrayBuffers":18549824}
{"rss":98959360,"heapTotal":7062528,"heapUsed":27432949,"external":21014117,"arrayBuffers":18410620}
{"rss":90341376,"heapTotal":7284736,"heapUsed":27733885,"external":21056141,"arrayBuffers":18467420}
{"rss":92925952,"heapTotal":7581696,"heapUsed":28006928,"external":21108928,"arrayBuffers":18516568}
{"rss":94744576,"heapTotal":7434240,"heapUsed":27866932,"external":21081652,"arrayBuffers":18491012}
There are no queries being done, just the connection. Mongo recommends setting minPoolSize to 0 and a maxIdleTimeMS for low-use applications, which has idle connections cleared from the pool. This does not appear to be happening with Bun. Any help very much appreciated. The two result sets above were run on the same instance, against the same db, at the same time (so coincident in every way).
In case anyone else is struggling with this, we think we know where this originates. If you are connected to more than a single, local db (i.e. a replica set or sharded cluster), the Mongo client opens 2 connections to each instance for maintenance, which is where it finds the topology information. We believe this is where the original problem starts, and something in the latest client (6.20.0) leaks. The leak appears worse on Bun than on Node.js, but it is a leak nonetheless. We have already raised this with Mongo: "isolated memory leak with basic connection". The client also appears to have problems with the connection pool. So the only combination we have found that does not leak like a sieve is:
- Bun 1.2.2
- node-mongodb-native 6.17.0
The client connection pool defaults to 100 maximum connections, so if you have a large cluster that is a lot of connections, to potentially multiple db instances, being managed. Obviously if you have a very high query rate then the connection pool needs to be substantial for performance, but what we have found is you must set minPoolSize to 0, and maxIdleTimeMS to something like 10000 (10 s). That way, if connections become idle, the client will clear them down. In our case, we have also had to keep maxPoolSize small (between 5 and 10). This limits possible concurrency, and we have yet to see how it performs over a reasonably long period with high activity, but at least with this combination of versions and configuration we see all our Bun apps hovering around 95-100 MB RSS and not running away. We are still going to keep trying to fix this, but this may at least help someone else struggling with Mongo.
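The settings described above, collected into one options object. Values are the ones reported to keep RSS in check; tune maxPoolSize to your actual query concurrency.

```javascript
const poolOptions = {
  minPoolSize: 0,        // let the pool drain completely when idle
  maxIdleTimeMS: 10_000, // close connections idle for more than 10 s
  maxPoolSize: 10,       // small pool; raise only under high query load
};

// Usage, as in the scripts above (MongoClient from the mongodb module):
// const client = await MongoClient.connect(uri, poolOptions);
```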
Can confirm the issue still persists on Bun 1.3.2 at least. Thank you for reporting it to Mongo and for your continued investigation!
I tried your suggested settings, but to no avail. On Bun 1.3.2 with MongoDB driver 7.0.0, memory steadily climbs to 100% within a day.
So we have yet to try mongo 7.0.0 ourselves, but running against a global, 31-node sharded cluster, the following client options keep the memory leak minimised (still present, but slow):
- maxIdleTimeMS: 10000
- minPoolSize: 0
- maxPoolSize: 10
We originally built with Bun v1.2.2, but have recently moved to Bun v1.3.2, using node-mongodb-native 6.17.0. It is the above config that has the impact for us, not the Bun version (at least with the latest release).
Can confirm: when running Bun v1.3.2 with mongoose v8.19.3 the issue persists, even with the connection configuration.
How to replicate
Dockerfile
FROM oven/bun:1.3.2-alpine AS build
WORKDIR /usr/src/app
# Copy manifest and install dependencies (cached unless they change)
COPY package.json bun.lock tsconfig.json ./
RUN bun install --production --frozen-lockfile
# Copy application source
COPY src ./src
# Build once to type-check and emit JavaScript artifacts
RUN bun run build
FROM alpine:3.22.2
RUN apk add --no-cache libstdc++
WORKDIR /usr/src/app
# Copy the built application from the build stage
COPY --from=build /usr/src/app/app .
CMD ["./app"]
package.json
{
"name": "app",
"version": "1.0.50",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"dev": "bun run --watch src/index.ts",
"build": "bun build src/index.ts --compile --production --minify --target=bun-linux-arm64-musl --sourcemap --outfile app",
"docker-build": "docker buildx build -t bun-runaway-test:local-build --load .",
"docker-run": "docker run -d -p 3000:3000 --platform linux/arm64 --memory-reservation=300m --memory=300m --network internal-network --name bun-runaway-test --env-file .env.docker docker.io/library/bun-runaway-test:local-build"
},
"dependencies": {
"elysia": "^1.4.16",
"mongoose": "^8.19.3"
},
"devDependencies": {
"bun-types": "^1.3.2"
},
"module": "src/index.js"
}
/src/index.ts
import { Elysia, t } from 'elysia';
import mongoose from 'mongoose';
const connectDB = async (): Promise<void> => {
const connectionStr: string = `${process.env.MONGO_TYPE}://${process.env.MONGO_USERNAME}:${process.env.MONGO_PASS}@${process.env.MONGO_URI}/${process.env.MONGO_DB_NAME}?${process.env.MONGO_PARAMS}`;
mongoose.set('strictQuery', false);
try {
await mongoose.connect(connectionStr, {
// Connection pool settings to prevent memory leaks
maxPoolSize: 10, // Maximum number of connections in the pool
minPoolSize: 0, // Minimum number of connections in the pool
socketTimeoutMS: 45000, // Close sockets after 45 seconds of inactivity
serverSelectionTimeoutMS: 5000, // Timeout for server selection
family: 4, // Use IPv4, skip trying IPv6
});
console.log(`${process.env.MONGO_DB_NAME} connected successfully`);
} catch (error) {
console.log('MongoDB connection error:', error);
throw error;
}
};
// Connect to MongoDB
connectDB().catch((error) => {
console.error('Failed to connect to the database:', error);
process.exit(1); // Exit the process with failure
});
const app = new Elysia().get('/', () => 'Hello Elysia').listen(3000);
console.log(`🦊 Elysia is running at ${app.server?.hostname}:${app.server?.port}`);
Execute bun install, bun run docker-build, then bun run docker-run; after that just run docker stats and observe as memory climbs from 50 MB to 55 MB in about two minutes, and continues to climb with 0 requests being sent to the server.
Note
- Running the server without the connectDB call results in steady, flat, consistent memory usage.
- The issue is confirmed with MongoDB Atlas, as well as with a 3-replica MongoDB cluster running in Kubernetes.
- The same version of mongoose running in Node.js with esbuild and postject (to replicate the same "single executable binary") does NOT experience this memory leak at all.
Edit
Turns out I missed a setting, maxIdleTimeMS: 10000, which in fact does resolve the issue (at least so far), but the baseline issue still remains overall.
@sagi-chronom I was just about to say, you missed that setting. It is the combination of maxIdleTimeMS and minPoolSize: 0 that slows the leak.
Is this behavior caused by Bun, the MongoDB driver implementation, or something that only occurs when the two interact?
So the mongodb module leaks: less under Node.js than under Bun, but it still leaks. We have not yet tested the new v7 driver ourselves. If you don't set minPoolSize: 0 and maxIdleTimeMS (something like 5000 or 10000), it can leak hundreds of MB per day.
@rgillan Unfortunately, the issue still persists even with all the settings, including maxIdleTimeMS; it just takes a lot longer to manifest, but there is still a steady, irritating, non-heap-related leak (I confirmed this by running Bun.gc(true), which had no effect on the RSS).
As the Grafana screenshot shows, there is a steady, slow incline in memory. Note that the last request that was not a health check was made at 2025-11-13T18:50:31.418Z; it is currently Sun Nov 16 10:15:00 UTC 2025.
I think it is also worth mentioning, that the CPU also spiked for some reason, I am still checking on my end to see if it related in any way or not, cause the timing does not correlate, but still.
I can confirm the CPU spike in multiple bun applications with mongodb. in my case it stays at 100% until restarted.
Same here; upon investigating our stats, it is present in all Bun-migrated code bases. I can't seem to find the trigger for the initial spike, but it does occur in all Bun + Mongo code bases. I am on the verge of migrating back from Bun to Node.js, which really sucks because the performance increase we saw outside this issue is significant, but it does not justify the issue.
I think this is something that should be mentioned in the Bun manual's page on the Mongo integration (https://bun.com/docs/guides/ecosystem/mongoose), at least until we have a resolution path for this bug, because right now the docs make it look like the integration with Mongo is really tight.
So please don't assume that the mongo module itself is not leaking. We have already raised a ticket with them on this, and whilst the leak is worse when using Bun, it's certainly not non-existent with nodejs/mongodb combo.
Suggest you raise a separate ticket to track cpu, as this one is focused on an isolated memory leak with mongo. We are not seeing the cpu spike (linux, arm64) across any of the 15 applications we have migrated to bun.
Do you use the --compile flag in your 15 applications? I am narrowing down the variables in our use case and wanted to know if you have the same setup we do (this is my main suspect so far, but nothing definitive yet).
Yes we do produce single file executables:
bun build "source.ts" --compile --minify --sourcemap --outfile "build/target"
We do have other applications running elsewhere where Bun runs the TypeScript directly (and has a Mongo connection), and they still had OOM issues, which changed radically when we tightened the connection pool settings.
We've tested with Node 25 and MongoDB driver 7.0.0, and have gotten much better results.
The red line covers our Nest.js app on Bun 1.3.2 with mongodb 7.0.0; the blue line is Node.js 25, with the following MongoDB options:
const MONGO_OPTS = {
minPoolSize: 0,
maxIdleTimeMS: 10000,
maxPoolSize: 10,
} satisfies MongoClientOptions;
RSS still climbs, but it's like 11 MB over the course of a day. If we go from crashing daily to crashing once a week or once a month (or never, since the service is auto-restarted by Cloud Run at regular intervals), then this is preferred.
Memory logs
We're switching our critical and heavy-duty services back to node.
Update
Bun v1.3.3 was just released; we haven't tested this issue on that version yet.
In the meantime, we found a workaround that works well for us and saved us from migrating the codebase from Bun back to npm + Node.js.
With only minimal changes to our Bun codebase (for example, replacing Bun.gzip with Node's zlib.gzip), we can compile it targeting Node.js and then run the result using the regular Node.js runtime. In our tests over the last few days, memory usage now shows almost no growth over a period of about 4 days.
To build for the Node.js target we use:
bun build src/app.ts --minify --production --target=node --outfile app.js
Grafana Screenshot of the revised pod
Revised Dockerfile
FROM oven/bun:1.3.2-alpine AS build
WORKDIR /usr/src/app
# Copy manifest and install dependencies (cached unless they change)
COPY package.json bun.lock tsconfig.json ./
RUN bun install --production --frozen-lockfile
# Copy application source
COPY src ./src
# Build for Node.js target
RUN bun run build-node
FROM node:24.11.1-alpine3.22
WORKDIR /usr/src/app
CMD ["node", "app.js"]
# Copy the built application from the build stage
COPY --from=build /usr/src/app/app.js .
I have Bun 1.3.1 and Mongo driver 7.0 with:
minPoolSize: 0, maxPoolSize: 50, socketTimeoutMS: 30_000, maxIdleTimeMS: 10_000,
with no growth, and the app is managing to release memory. I'm still testing it, but it really seems to work for me.
Does this still work for you? I tried it and it still grows 10 MB per hour :(