Results: 199 comments of Khafra

This is unfortunately an issue with using web streams vs. node streams. Node streams currently perform much better than web streams.
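For reference, a minimal sketch (not a rigorous benchmark) of the kind of comparison behind that claim, assuming Node 18+ where `ReadableStream` is a global: push the same chunks through a Node `Readable` and a web `ReadableStream` and time how long each takes to drain.

```js
const { Readable } = require('node:stream')

const chunk = Buffer.alloc(64 * 1024)
const chunkCount = 10_000

async function consumeNodeStream () {
  const readable = Readable.from(
    (function * () {
      for (let i = 0; i < chunkCount; i++) yield chunk
    })()
  )
  const start = performance.now()
  for await (const _ of readable) { /* drain */ }
  return performance.now() - start
}

async function consumeWebStream () {
  let i = 0
  const readable = new ReadableStream({
    pull (controller) {
      if (i++ < chunkCount) controller.enqueue(chunk)
      else controller.close()
    }
  })
  const start = performance.now()
  const reader = readable.getReader()
  while (true) {
    const { done } = await reader.read()
    if (done) break
  }
  return performance.now() - start
}

consumeNodeStream()
  .then((ms) => console.log('node stream:', ms.toFixed(1), 'ms'))
  .then(() => consumeWebStream())
  .then((ms) => console.log('web stream:', ms.toFixed(1), 'ms'))
```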

If we don't decompress the response, it takes 4.4 seconds. If we do (using `zlib.createBrotliDecompress()`) it takes ~70 seconds.
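A rough, self-contained sketch of how the decompression half of that timing could be measured locally (the payload size here is an arbitrary assumption, not the one from the original report): compress a buffer with brotli once, then time streaming it back through `zlib.createBrotliDecompress()`.

```js
const zlib = require('node:zlib')
const { Readable } = require('node:stream')
const { pipeline } = require('node:stream/promises')

// 64 MiB of highly compressible data; quality 4 keeps the setup step fast.
const raw = Buffer.alloc(64 * 1024 * 1024, 'a')
const compressed = zlib.brotliCompressSync(raw, {
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 4 }
})

async function main () {
  const start = performance.now()
  await pipeline(
    Readable.from([compressed]),
    zlib.createBrotliDecompress(),
    async function (source) {
      // Drain the decompressed chunks so the whole pipeline actually runs.
      for await (const chunk of source) { /* drain */ }
    }
  )
  console.log('decompress took', (performance.now() - start).toFixed(1), 'ms')
}

main()
```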

... and if we re-assign `this.body` to the result of `pipeline`, it fixes the slowdown, but causes problems with invalid gzip/brotli-compressed bodies. @ronag you know much more about streams...
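A hypothetical sketch of the pattern being discussed (not undici's actual code): the body reference is replaced with the stream returned by the callback-style `pipeline()`, which hands back its last stream. The trade-off mentioned above is error handling.

```js
const zlib = require('node:zlib')
const { pipeline } = require('node:stream')

function decodeBody (body, contentEncoding = '') {
  let decoder = null
  if (/\bgzip\b/.test(contentEncoding)) decoder = zlib.createGunzip()
  else if (/\bbr\b/.test(contentEncoding)) decoder = zlib.createBrotliDecompress()
  if (!decoder) return body

  // pipeline() returns its last stream, so the decoded stream becomes the new body.
  return pipeline(body, decoder, (err) => {
    // Swallowing the error here is exactly the problem described above: a malformed
    // gzip/brotli payload errors the pipeline, but the consumer of the returned
    // stream never finds out the body could not be decoded.
    if (err) { /* needs to be surfaced to whoever reads the body */ }
  })
}
```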

So for example, this diff fixes the issue:

```diff
diff --git a/lib/fetch/index.js b/lib/fetch/index.js
index 0b2e3394..14e84b29 100644
--- a/lib/fetch/index.js
+++ b/lib/fetch/index.js
@@ -2023,7 +2023,7 @@ async function httpNetworkFetch (
   status,
   statusText,
...
```

Yeah, it does, but what I don't understand is why it causes an issue here but not with node-fetch. node-fetch uses pipeline & zlib too.
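A rough side-by-side timing sketch for that comparison, against an assumed endpoint (not the one from the report), with node-fetch v2 so `require` works: both clients fetch the same compressed URL and we time how long it takes to fully consume the decoded body.

```js
const { fetch: undiciFetch } = require('undici')
const nodeFetch = require('node-fetch') // assumes node-fetch v2 (CommonJS)

async function time (name, doFetch, url) {
  const start = performance.now()
  const res = await doFetch(url)
  await res.arrayBuffer() // consume (and decompress) the whole body
  console.log(name, (performance.now() - start).toFixed(1), 'ms')
}

// Hypothetical local endpoint serving a large gzip/brotli-compressed body.
const url = 'http://localhost:3000/large-compressed-payload'

time('node-fetch', nodeFetch, url)
  .then(() => time('undici.fetch', undiciFetch, url))
```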

I thought so too (made an issue in the performance repo), but considering that removing the decompression fixes the issue...?

> web streams have smaller chunks

No, I don't think so? In the OP, node-fetch has 10k more chunks than undici.fetch does. Adjusting the highWaterMark/size didn't make much difference if...
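A small helper sketch for checking the chunk-size theory: count how many chunks a response body produces and how large they are on average, then run it once per client. The URL is a placeholder; point it at whatever endpoint reproduces the slowdown.

```js
const { fetch } = require('undici')

async function chunkStats (url) {
  const res = await fetch(url)
  let chunks = 0
  let bytes = 0
  for await (const chunk of res.body) {
    chunks++
    bytes += chunk.byteLength
  }
  return { chunks, bytes, avgChunkSize: Math.round(bytes / chunks) }
}

chunkStats('http://localhost:3000/large-brotli-payload') // hypothetical endpoint
  .then((stats) => console.log(stats))
```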

@cakedan do you have a repro that runs locally, without the external server? I can't seem to replicate the issue
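Something along these lines would work as a local repro (assumed shape, not the original reporter's code): a local HTTP server serves a large brotli-compressed body, and we time how long undici's fetch takes to consume it.

```js
const http = require('node:http')
const zlib = require('node:zlib')
const { fetch } = require('undici')

// 32 MiB of compressible data, brotli-compressed once up front.
const payload = zlib.brotliCompressSync(Buffer.alloc(32 * 1024 * 1024, 'a'))

const server = http.createServer((req, res) => {
  res.writeHead(200, {
    'content-encoding': 'br',
    'content-type': 'application/octet-stream'
  })
  res.end(payload)
})

server.listen(0, async () => {
  const url = `http://localhost:${server.address().port}/`
  const start = performance.now()
  const res = await fetch(url)
  await res.arrayBuffer() // forces the body (and decompression) to be consumed
  console.log('fetch + decompress took', (performance.now() - start).toFixed(1), 'ms')
  server.close()
})
```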

Can you add a test? These shouldn't be exported in Node 14.

There are CI failures caused by a node-fetch test (`npm run test:node-fetch`).