
Excessive memory usage (version 1.3.0, nginx 1.22.1)

Open dup2 opened this issue 2 years ago • 5 comments

We use nginx 1.22.1 with mod_zip 1.3.0 and see excessive memory usage.

We create ZIP files from local files only, based on a manifest without CRC-32 checksums.
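
For context, this is roughly how we build the manifest (a minimal sketch; the `/data` directory and the `/files/` location are made-up examples). As far as I understand the mod_zip manifest format, each line is `<crc-32> <size> <location> <archive name>`, with `-` for an unknown CRC-32, and the response carrying the manifest has the `X-Archive-Files: zip` header set:

```bash
#!/usr/bin/env bash
# Sketch: emit a mod_zip manifest without CRC-32 checksums.
# /data and the /files/ location are placeholders for our real setup.
for f in /data/*; do
    name=$(basename "$f")
    size=$(stat -c %s "$f")
    # "-" marks the CRC-32 as unknown
    printf -- "- %s /files/%s %s\n" "$size" "$name" "$name"
done
```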

For a small number of files (e.g. 3 files, around 11 GB in total) there is no problem; memory usage stays almost constant.

For a larger number of files (e.g. 200 files, around 2 GB in total) the memory usage spikes to several hundred MB, sometimes even several GB. The memory usage seems to be related to the number of files.

We are aware of #67 but this does not seem to fix the issue.

nginx information

nginx version: nginx/1.22.1
built by gcc 11.2.0 (Ubuntu 11.2.0-19ubuntu1) 
built with OpenSSL 3.0.2 15 Mar 2022
TLS SNI support enabled

This is running on Ubuntu 22.04

dup2 · Jul 12 '23 07:07

Any feedback on this? Or can someone explain to me how to debug this?

dup2 · Jul 27 '23 15:07

More tests reveal a more accurate picture (using top to check the memory of the nginx worker, looking at VIRT and RES; see the sketch after the table).

| File information | Total size | VIRT | RES | Comment |
|---|---|---|---|---|
| 75 x 3.8 MB | 285 MB | 565 MB | 288 MB | RES is about the total size |
| 75 x 7.5 MB | 562 MB | 846 MB | 571 MB | RES is about the total size |
| 75 x 75 MB | 5.6 GB | 318 MB .. 508 MB .. 641 MB .. 812 MB | 65 MB .. 233 MB .. 365 MB .. 537 MB | initial .. after 15% .. after 25% .. after 35% |
| 5 x 750 MB | 3.7 GB | 299 MB .. 321 MB .. 343 MB | 26 MB .. 49 MB .. 71 MB | 1st file .. 2nd file .. 3rd file |
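
A minimal sketch of one way to sample the worker's VIRT/RES over time (here with ps rather than an interactive top; it assumes a single worker process, adjust the pgrep pattern otherwise):

```bash
# Sample the nginx worker's VIRT/RES roughly once per second during a download.
# Assumes a single worker process; values are converted from KB to MB.
pid=$(pgrep -f 'nginx: worker' | head -n 1)
while sleep 1; do
    ps -o vsz=,rss= -p "$pid" |
        awk '{printf "VIRT %.0f MB  RES %.0f MB\n", $1/1024, $2/1024}'
done
```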

So it seems that for smaller files the resident (RES) memory usage matches the total size, while for medium and large files around 20 MB of resident memory is used per file (the memory usage increases as streaming progresses).

This means that for a few large files it works with reasonable memory usage, but for a large number of files it does not scale, because the memory usage is per file.

dup2 · Jul 28 '23 08:07

What's the behavior before the patch made in #67? Is it worse after it?

nahuel · Jul 30 '23 00:07

We see no change in our tests when comparing 1.2 and 1.3. Since the issue seems to depend on the number of files, we suspect that the subrequests are using up memory, for example that the result of each subrequest is stored in a memory buffer that appears to be capped at around 20 MB.

The mentioned #67 seems to affect only use cases with a very large number of files.
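
One way to dig further into this suspicion might be an nginx debug build, which logs every pool allocation; a rough sketch (the module path and log path are placeholders, --with-debug and --add-module are standard nginx configure flags):

```bash
# Rebuild nginx with debug logging so allocations show up in the error log.
# /path/to/mod_zip is a placeholder for the local module checkout.
./configure --with-debug --add-module=/path/to/mod_zip
make

# nginx.conf:  error_log /var/log/nginx/debug.log debug;

# After downloading one archive, count allocation lines to see whether they
# grow with the number of files in the manifest.
grep -cE 'malloc:|posix_memalign:' /var/log/nginx/debug.log
```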

dup2 · Jul 30 '23 07:07

We have the same issue. We created 10000 files of 1 MB each:

for i in {1..10000}; do dd if=/dev/random of=$i.dat bs=1M count=1; echo $i; done

256 MB of RAM is not enough to download the archive; memory usage keeps increasing until the worker is OOM-killed.
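
For what it's worth, this is roughly how the memory cap can be applied to reproduce the OOM kill (a sketch; the image name, config paths, and download URL are placeholders):

```bash
# Run nginx (built with mod_zip) in a container capped at 256 MB and
# download one archive; the worker is OOM-killed before it finishes.
docker run -d --rm -m 256m \
    -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
    -v "$PWD/data:/data:ro" \
    -p 8080:80 nginx-with-mod-zip
curl -s -o /dev/null http://localhost:8080/archive.zip
```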

herclogon · Jun 25 '24 16:06