node-csv
Memory usage issue with stream-transform
Describe the bug
When using stream-transform to process large datasets with the parallel option set to a value greater than 1, we're seeing high memory usage.
To Reproduce
const fs = require('fs')
const memwatch = require('@airbnb/node-memwatch')
const { pipeline } = require('stream/promises')
const { transform } = require('stream-transform')

let maxUsedHeap = 0

async function main() {
  memwatch.on('stats', (stats) => {
    maxUsedHeap = Math.max(maxUsedHeap, stats.used_heap_size)
  })
  await pipeline(
    function* () {
      let i = -1
      const n = 9999999
      while (++i < n) {
        yield { i }
      }
    },
    transform({ parallel: +process.env.PARALLEL }, (chunk, next) =>
      next(null, chunk.i)
    ),
    fs.createWriteStream('/tmp/output')
  )
  console.log(`${maxUsedHeap / (1000 * 1000)}mb`)
}

main()
// $ PARALLEL=1 node example.js
// 6.009856mb
// $ PARALLEL=2 node example.js
// 320.684144mb
Additional context
- Our theory is that this is backpressure-related. We noticed that this.push was returning false to indicate that the stream should pause reading, yet stream-transform asks for more input over here regardless (see the sketch after this list for the pattern we expected).
- To support that theory, this change seems to resolve the issue, though it isn't a proper solution: https://gist.github.com/477d30dfeb443be9a92ac8a3aedc238f
- Thank you for the CSV project :) it's helping us a ton here at snaplet.dev
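Here is a minimal sketch of the pattern we expected (CountingSource is just an illustrative name, not stream-transform code): stop producing when push() returns false, and resume when Node calls _read() again after the consumer drains the buffer.

const { Readable } = require('stream')

// Illustrative only: a source that honors push()'s return value.
class CountingSource extends Readable {
  constructor(limit) {
    super({ objectMode: true })
    this.i = 0
    this.limit = limit
  }
  _read() {
    while (this.i < this.limit) {
      const hasRoom = this.push({ i: this.i++ })
      if (!hasRoom) return // pause here; Node calls _read() again once drained
    }
    this.push(null) // end of stream
  }
}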
Did you resolve this? I'm seeing similar issues.
I haven't had time to look at the issue yet.
I have a minimal working example of this bug on my fork. It looks like the transform breaks Node's built-in backpressure logic by ignoring the return value from push. This causes the transform buffer to grow without bound whenever the source is faster than the sink. More info.
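For illustration, here is a small monitoring sketch (my own, not code from the linked branch; it assumes the transformer exposes Node's standard writableLength property, since it extends stream.Transform) that makes the buffer growth visible when a fast generator feeds a deliberately slow sink:

const { pipeline, Writable } = require('stream')
const { transform } = require('stream-transform')

// A do-nothing transform between a fast source and a slow sink.
const transformer = transform({ parallel: 2 }, (chunk, next) => next(null, chunk))

const slowSink = new Writable({
  objectMode: true,
  highWaterMark: 1,
  write(chunk, encoding, callback) {
    setTimeout(callback, 100) // accept one record every 100ms
  },
})

// If backpressure is ignored, this count climbs without bound (stop with Ctrl-C).
setInterval(() => {
  console.log('records buffered in the transform:', transformer.writableLength)
}, 1000)

pipeline(
  function* () {
    let i = 0
    while (true) yield { i: i++ } // infinite source, as fast as possible
  },
  transformer,
  slowSink,
  (err) => {
    if (err) console.error(err)
  }
)

With backpressure respected, the logged count stays near the high-water mark; when it is ignored, it keeps growing until the process runs out of memory.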
To see this in action, check out the branch linked above, change directory to packages/stream-transform, and run npm test. You will get an out-of-memory error within a few minutes. But if you comment out the do-nothing transform in samples/backpressure.js, the test runs for an hour with no issue.
I reproduced your code sample. With the latest source code, memory usage stays between 20MB and 30MB on a 30GB generated CSV file. Perhaps a change since your report fixed the issue. Can you confirm?
Still not working on my repro:
import { transform } from 'stream-transform';
import { pipeline, Readable, Writable } from 'stream';

class DummyData extends Readable {
  constructor() {
    super();
    this.numReads = 0;
  }
  _read() {
    // Push incrementing values forever
    this.push(JSON.stringify({ string: 'read_' + this.numReads }));
    this.numReads++;
  }
}

class Stopper extends Writable {
  constructor() {
    super({
      objectMode: true,
      highWaterMark: 1, // Allow just one item in buffer; apply backpressure otherwise
    });
  }
  // Accept chunks extremely slowly; discard the chunk data
  _write(chunk, encoding, callback) {
    console.log('wrote one out');
    setTimeout(callback, 1000);
  }
}

pipeline(
  new DummyData(),
  transform((data) => data), // Comment out this line, the test runs forever. Leave it in, run out of memory pretty quick.
  new Stopper(),
  () => { },
);
@dmurvihill Could you have a look at the latest release of stream-transform, version 3.2.10? It now takes the return value from push into account to throttle execution. Based on your feedback, I created a new script to replicate the issue.
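For reference, here is a rough sketch of that kind of throttling (the generic Node.js pattern, not the actual stream-transform 3.2.10 implementation; ThrottledPassThrough is a made-up name): when push() reports a full readable buffer, hold on to the write-side callback instead of invoking it, and release it from _read() once the consumer has drained the buffer.

const { Duplex } = require('stream')

class ThrottledPassThrough extends Duplex {
  constructor() {
    super({ objectMode: true })
    this.pendingCallback = null
  }
  _write(chunk, encoding, callback) {
    if (this.push(chunk)) {
      callback() // readable buffer has room: ask for the next chunk right away
    } else {
      this.pendingCallback = callback // full: hold the callback, stalling the writable side
    }
  }
  _read() {
    // Node calls _read() once the consumer drains the readable buffer;
    // releasing the held callback resumes the writable side.
    if (this.pendingCallback) {
      const callback = this.pendingCallback
      this.pendingCallback = null
      callback()
    }
  }
  _final(callback) {
    this.push(null) // end the readable side once all input has been written
    callback()
  }
}

This mirrors how Node's built-in Transform defers the write-side callback when its readable buffer reaches the high-water mark.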
Hey, thanks for being so attentive to this issue! It looks like that does respect backpressure and pause the stream, but how does it get unpaused?
I didn't reproduce the pausing behavior. I will need to dig more into it. Any chance you could reproduce the pausing in my sample?