Pass max_length parameter to ZLibDecompressor
Is your feature request related to a problem?
Steps to reproduce:
- Create file:
  dd if=/dev/zero bs=1M count=1024 | gzip > 1G.gz
- Start nginx on localhost with location:
  location /1G.gz {
      try_files $uri /tmp/1G.gz;
      add_header Content-Encoding gzip;
  }
- Try to download file:
  async with aiohttp.ClientSession(headers=headers) as session:
      async with session.get('http://localhost/1G.gz') as response:
          async for content in response.content.iter_any():
              print(len(content))
- See memory usage. It grows very large. I run my service in Docker with a memory limit on the container, and the process gets OOMKilled.
Describe the solution you'd like
It would be great to have an option to pass max_length to ZLibDecompressor (https://github.com/aio-libs/aiohttp/blob/master/aiohttp/compression_utils.py#L104).
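For reference, this mirrors what the max_length argument of the standard library's zlib.Decompress.decompress() already does: it caps how much decompressed data a single call may return and leaves the unconsumed compressed input in unconsumed_tail. A minimal sketch with plain zlib (not aiohttp's wrapper):

```python
import zlib

compressed = zlib.compress(b"\x00" * (1024 * 1024))  # highly compressible payload

d = zlib.decompressobj()
# Cap this single call's output at 4 KiB instead of inflating everything at once.
out = d.decompress(compressed, 4096)
print(len(out))                # 4096
print(len(d.unconsumed_tail))  # compressed input left to feed back in later
```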
Describe alternatives you've considered
Change the default max_length from 0 to 4 KiB or greater.
Related component
Client
Additional context
No response
Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
At a glance, if I've understood that correctly, changing the parameter would just stop you from receiving the full response (as that method is only going to be called when new data is received). Isn't the problem more that it shouldn't be calling that method with too much data? As you are using the streaming API, I'd expect it to only receive a few KBs of data each time and call that decompress method with it each time. So, if you see high memory usage, I wonder if the issue is actually with the streaming API instead...
@mykola-mokhnach might have some ideas as they last worked on the compression code.
For example, 1 KiB from the socket can become about 1 MiB or more after gunzip, because the buffer limit applies to the incoming data from the socket, not to the decompressed output.
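To make the ratio concrete, here is a quick illustration with the standard gzip module (the sizes are approximate):

```python
import gzip

raw = b"\x00" * (1024 * 1024)            # 1 MiB of zeros, like dd if=/dev/zero
compressed = gzip.compress(raw)
print(len(compressed))                    # roughly 1 KiB on the wire
print(len(gzip.decompress(compressed)))   # 1 MiB once decompressed
```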
Right, good point. So, if it's data with a very high compression ratio it could become a problem. I suspect this would still need some refactoring in order to get it to return the full data.
i.e. Instead of something like:
  for chunk in data_received:
      yield decompress(chunk)
we'd need something more along the lines of:
  for chunk in data_received:
      while chunk:
          result = decompress(chunk)
          yield result.data
          chunk = result.unconsumed_tail
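Roughly, with the standard zlib module that loop could look something like the sketch below (iter_decompress and the 16 KiB cap are just illustrative, not aiohttp's actual API, and in plain zlib the unconsumed tail lives on the decompressor object rather than on a result):

```python
import zlib

def iter_decompress(chunks, max_length=16 * 1024):
    # wbits=16 + MAX_WBITS tells zlib to expect the gzip wrapper.
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
    for chunk in chunks:
        data = chunk
        while data:
            # Never produce more than max_length bytes per call.
            out = d.decompress(data, max_length)
            if out:
                yield out
            # Input that was not consumed because the output cap was hit;
            # it must be fed back in on the next pass.
            data = d.unconsumed_tail
```

That way the peak memory per yielded chunk is bounded by max_length regardless of the compression ratio.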
Feel free to try and create a PR. I'd assume it would be fine to change the default behaviour, as the purpose of the streaming API is to limit memory usage, so this looks like an oversight to me.
Ideally, this would need to be implemented in the brotli decompressor too.
@Dreamsorcerer thank you! I'll work on it. I created an issue in brotli, since it doesn't have that option.