jeromew
Hello, thanks for the report. I need to spend some time on this. Did you try to optimize the calls through the `_writev` mechanism? From my vague understanding, this...
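For illustration, here is a minimal sketch (not the module's actual code) of a Writable that batches pending chunks through `_writev`; the class and method names are just placeholders:

```js
const { Writable } = require('stream')

class BatchingWritable extends Writable {
  _write(chunk, encoding, callback) {
    // Called when only a single chunk is pending.
    this._send(chunk, callback)
  }

  _writev(chunks, callback) {
    // Called when several chunks were buffered while a previous write
    // was still in flight: concatenate them and send them in one go.
    const payload = Buffer.concat(chunks.map((c) => c.chunk))
    this._send(payload, callback)
  }

  _send(buf, callback) {
    // placeholder for the real work (e.g. writing COPY data to the socket)
    setImmediate(callback)
  }
}
```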
just to keep as a reference - https://youknowfordevs.com/2018/10/29/using__writev-to-create-a-fast-writable-stream-for-elasticsearch.html
Hum, so this could be the explanation: there is no backpressure from the socket (throughput could be increased), so the `writev` mechanism is never called, and as a...
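A tiny toy example of that point (not taken from the benchmark): chunks only reach `_writev` when they pile up behind a write that has not completed yet, i.e. when the destination pushes back.

```js
const { Writable } = require('stream')

const slow = new Writable({
  write(chunk, encoding, callback) {
    setTimeout(callback, 10) // slow destination: later writes queue up
  },
  writev(chunks, callback) {
    console.log(`_writev received ${chunks.length} buffered chunks`)
    setTimeout(callback, 10)
  },
})

for (let i = 0; i < 5; i++) slow.write(Buffer.from(`row ${i}\n`))
// With the 10ms delay, rows 1-4 are delivered together through writev.
// Remove the delay (no backpressure) and every row goes through write() alone.
```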
OK, I have to grasp the 2 scenarios you mention in your benchmark. The initial benchmark (wrapping `seq` in a stream) was supposed to mimic a real-world scenario using...
I looked a bit into the benchmarks. I agree that the old benchmarks are not correct for the use case where a user produces chunks row-by-row. As you noted, this...
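To make the discussion concrete, here is a rough sketch of the kind of aggregator we are talking about: a Transform that coalesces small row-by-row chunks into larger buffers before they reach copy-from. The `rowBatcher` name and the 64 KB threshold are only illustrative.

```js
const { Transform } = require('stream')

function rowBatcher(threshold = 64 * 1024) {
  let pending = []
  let size = 0
  return new Transform({
    transform(chunk, encoding, callback) {
      pending.push(chunk)
      size += chunk.length
      if (size >= threshold) {
        // enough bytes accumulated: emit one large buffer downstream
        this.push(Buffer.concat(pending, size))
        pending = []
        size = 0
      }
      callback()
    },
    flush(callback) {
      // emit whatever is left when the source ends
      if (size > 0) this.push(Buffer.concat(pending, size))
      callback()
    },
  })
}

// usage: rowSource.pipe(rowBatcher()).pipe(copyFromStream)
```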
Other options for the aggregator:
- https://www.npmjs.com/package/block-stream2
- https://www.npmjs.com/package/rebuffer
- https://www.npmjs.com/package/stream-chunkify
pipe into psql COPY x 6.16 ops/sec ±15.45% (37 runs sampled)
pipe into pg-copy-stream COPY x 1.22 ops/sec ±5.62% (11 runs sampled)
pipe into pg-copy-stream COPY (batched version) x 37.35...
I did some more tests on this. It looks like I had lowered the number of generated lines, which gave wrong results on the benchmarks (only 9999 lines were...
I agree that there are advantages to having it incorporated into copy-from. It is not so easy for new users of the module to know if and when adding the...
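To make the trade-off concrete, this is roughly what a user has to write today versus what an integrated version could look like (`source`, `client` and `rowBatcher` are assumed from the earlier sketch, and the `batchSize` option is purely hypothetical):

```js
const { from: copyFrom } = require('pg-copy-streams')

// today: the caller has to know to insert an aggregator themselves
source.pipe(rowBatcher()).pipe(client.query(copyFrom('COPY t FROM STDIN')))

// hypothetical integrated form: batching handled inside copy-from,
// so the pipeline stays the obvious one
source.pipe(client.query(copyFrom('COPY t FROM STDIN', { batchSize: 64 * 1024 })))
```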
For future reference: in case we need an integrated implementation that exactly respects the chunk size, the `stream-chunkify` module seems to have an implementation with good performance (nearly...
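As a rough sketch of what "exactly respects the chunk size" means in practice (this is an illustration, not stream-chunkify's actual code): every emitted chunk except possibly the last is exactly `size` bytes, so oversized inputs are split and undersized ones are buffered until complete.

```js
const { Transform } = require('stream')

function exactChunks(size) {
  let leftover = Buffer.alloc(0)
  return new Transform({
    transform(chunk, encoding, callback) {
      let buf = leftover.length ? Buffer.concat([leftover, chunk]) : chunk
      while (buf.length >= size) {
        // emit full-size slices, keep the remainder for later
        this.push(buf.subarray(0, size))
        buf = buf.subarray(size)
      }
      leftover = buf
      callback()
    },
    flush(callback) {
      // the final chunk may be smaller than `size`
      if (leftover.length) this.push(leftover)
      callback()
    },
  })
}
```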