nginx config fastcgi_request_buffering breaks streaming / chunked transfer encoding
The admin manual suggests using `fastcgi_request_buffering off` in nginx configurations. However, this breaks in situations where the Content-Length header is not set (e.g. a streaming upload over WebDAV, as DAVx⁵ does).
Note that PHP does not plan to change php-fpm's behavior, as the (ancient, unmaintained) FastCGI spec requires the Content-Length header: https://bugs.php.net/bug.php?id=51191
nginx does not care, because it works with enabled buffering: https://trac.nginx.org/nginx/ticket/2287
Symptom analysis here (TL;DR: bigger uploads using DAVx⁵ will always fail): https://github.com/seedvault-app/seedvault/issues/505
My suggestion is to enable `fastcgi_request_buffering` to make this use case work, but I don't know whether anything else depends on buffering being disabled.
Efforts to remove php-fpm's limitation have failed, see https://github.com/php/php-src/pull/7509
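For concreteness, a minimal sketch of what that change would look like in the PHP location block. The directive names are real nginx directives, but the surrounding block is only an approximation of the admin manual's example (including the `php-handler` upstream name), since I can't check it right now:

```nginx
location ~ \.php(?:$|/) {
    # "on" is nginx's default; the Nextcloud example config sets it to "off".
    # With "on", nginx reads the whole request body before contacting php-fpm,
    # so php-fpm always receives CONTENT_LENGTH, even for chunked uploads.
    fastcgi_request_buffering on;

    # With buffering on, bodies larger than this are spooled to a temp file on disk.
    client_body_buffer_size 512k;

    fastcgi_pass php-handler;
}
```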
I'm not certain of the right path either. 😂
With a lot of these large file transfer matters the most honest answer comes down to: "It depends." :-)
My understanding is that turning this particular parameter on makes uploads more likely to time out (under some circumstances) and increases disk I/O and space usage (nginx stores the client request bodies while buffering).
I don't believe `off` is a requirement for NC per se (and we don't document it as such anywhere I'm aware of), but as with most things involving large file transfers... your mileage may vary (i.e. it depends on your environment, use cases, and priorities).
My hot take is that if we change it in the example config, then the only thing we're gaining is a different group of people requesting we toggle it the other way in six months. :-)
Another option, if neither on nor off is ideal for someone, is to use Apache + php-fpm or mod_php (at least for the app server; this isn't relevant when NGINX is used solely as a reverse proxy), which is the recommended deployment anyhow.
There's always room for better documenting this stuff though. So maybe the answer is: let's try to document the trade-offs of this particular switch today and fight the other battles tomorrow. :-)
Perhaps we can start by figuring out how to make the situation you ran across recognizable to a reader. That way they can discover this option and tweak it in their environment to see whether it makes a difference for their use case.
How would you describe the symptoms you encountered - before changing the value of this parameter - in a couple sentences?
I agree that it doesn't make sense to change the example if the implication isn't clearly understood.
I'll try to get a minimal reproducing setup to share and clarify the symptoms - this may take some time though. Since I fixed it for my setup, my priorities are currently elsewhere.
I ran some tests on a local docker deployment:
```shell
docker run --rm --name nc --network my-net -v nextcloud_html:/var/www/html nextcloud:fpm
docker run --rm --network my-net -p 80:80 -v /tmp/nginx.conf:/etc/nginx/nginx.conf:ro -v nextcloud_html:/var/www/nextcloud nginx
```
The nginx.conf file is roughly what was proposed in the admin guide (but I can't verify it right now, because it's down).
I ran DAV file uploads with curl:
```shell
$ dd if=/dev/urandom of=testdata.bin count=1 bs=100  # create a small file with random data
$ curl --verbose --user admin -T testdata.bin http://127.0.0.1/remote.php/dav/files/admin/testdata  # works
$ curl --verbose --user admin -T - http://127.0.0.1/remote.php/dav/files/admin/testdata < testdata.bin  # doesn't work
```
Both HTTP requests are answered with 204 No Content, but the first one (which does not use chunked transfer encoding and has Content-Length set) results in the full file being uploaded, while the second one (chunked transfer encoding, no Content-Length) results in a 0-byte file.
I expect the same behavior on any Android application using pipes, for example DAVx5, when using the Android standard ContentProvider file upload interface.
I think the exact behavior a user sees will depend on various factors (such as the buffering behavior of the HTTP client used).
Just to state the obvious: with `fastcgi_request_buffering on`, nginx will buffer up to `client_max_body_size` for every upload (in memory up to `client_body_buffer_size`, beyond that in a temporary file on disk), even for uploads with a known size, so this change will definitely have an impact on nginx performance.
Thanks to both of you for providing the historical context of the issue.
I'm here because of a severe bug:
- https://github.com/saber-notes/saber/issues/945
- https://github.com/nextcloud/neon/pull/1259
It is not about performance: `fastcgi_request_buffering off` can cause certain clients to silently write 0-byte files, and as a result users can lose data during synchronization.
I have no idea what the best solution is, but I do believe this should be escalated and addressed, either in the documentation or by working around it in Nextcloud's codebase. Maybe just ask users to stop using NGINX.
Related recent change made in server: nextcloud/server#43047