"Chunk was larger than configured CallBuilder::chunked_max_chunk" error even in v0.9.4
On some websites, e.g. http://lastgreatliar.com, mio_httpc v0.9.4 fails with the following error:
Error: Chunk was larger than configured CallBuilder::chunked_max_chunk. 262144
Firefox, curl and ureq work fine.
There are 13,628 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences: mio_httpc-0.9.4-cannot-chuck-the-chunk.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/8e3285a45e1d657744a2697ced1bd8461031fb86/mio_httpc-smoke-test/src/main.rs
This makes me wonder: why is the maximum chunk size limited in the first place (as opposed to limiting the size of the entire response)? Is the entire chunk retained in memory?
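For what it's worth, the chunked framing itself doesn't seem to require buffering a whole chunk: the size line tells you how many payload bytes follow, and those can be drained through a small fixed buffer. A rough std-only sketch (hypothetical, not mio_httpc's actual code; chunk extensions and trailers are ignored for brevity):

```rust
use std::io::Read;

/// Decode a chunked-transfer body, streaming each chunk through a small
/// fixed-size buffer instead of holding the whole chunk in memory.
fn decode_chunked<R: Read>(mut src: R, mut sink: impl FnMut(&[u8])) -> std::io::Result<()> {
    let mut buf = [0u8; 4096]; // far smaller than the 262144-byte limit in the report
    loop {
        let size = read_chunk_size(&mut src)?;
        if size == 0 {
            read_crlf(&mut src)?; // CRLF after the terminating 0-size chunk
            return Ok(());
        }
        let mut remaining = size;
        while remaining > 0 {
            let want = remaining.min(buf.len());
            src.read_exact(&mut buf[..want])?;
            sink(&buf[..want]); // hand payload to the caller incrementally
            remaining -= want;
        }
        read_crlf(&mut src)?; // CRLF terminating the chunk body
    }
}

/// Read the hex chunk-size line up to its CRLF.
fn read_chunk_size<R: Read>(src: &mut R) -> std::io::Result<usize> {
    let mut size = 0usize;
    let mut byte = [0u8; 1];
    loop {
        src.read_exact(&mut byte)?;
        match byte[0] {
            b'\r' => {
                src.read_exact(&mut byte)?; // consume the '\n'
                return Ok(size);
            }
            b => {
                let d = (b as char).to_digit(16).ok_or_else(|| {
                    std::io::Error::new(std::io::ErrorKind::InvalidData, "bad chunk size")
                })? as usize;
                size = size * 16 + d;
            }
        }
    }
}

fn read_crlf<R: Read>(src: &mut R) -> std::io::Result<()> {
    let mut crlf = [0u8; 2];
    src.read_exact(&mut crlf)
}

fn main() -> std::io::Result<()> {
    // One 8192-byte chunk, decoded through a 4096-byte buffer.
    let mut body = format!("{:x}\r\n", 8192).into_bytes();
    body.extend(std::iter::repeat(b'x').take(8192));
    body.extend_from_slice(b"\r\n0\r\n\r\n");

    let mut out = Vec::new();
    decode_chunked(&body[..], |part| out.extend_from_slice(part))?;
    assert_eq!(out.len(), 8192);
    println!("decoded {} bytes", out.len());
    Ok(())
}
```

So the framing layer can hand data downstream in bounded pieces; the question is whether anything downstream (e.g. decompression) forces whole-chunk buffering.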
It is. Responses usually come compressed, so the whole chunk has to be kept in memory for decompression. I'm going to mark this as won't-fix and add a section to the README.
I believe the range over which a gzip stream can reference earlier data is bounded: DEFLATE back-references cannot reach further back than the 32 KiB sliding window. So at least for gzip this should be fixable with a bounded amount of memory. I'm not sure about brotli (its window can be much larger), but that's not supported right now regardless.
Notably, reqwest doesn't have this issue; perhaps you could use its implementation as a reference?