open-fixture-library

Allow large file uploads in Nginx

DaAwesomeP opened this issue 3 months ago • 5 comments

Fixes #3837

DaAwesomeP · Mar 19 '24 22:03

Hi sorry, I will rebase.

I'm now getting 504 Gateway Time-out from Nginx. I think we also need to increase proxy_read_timeout for the exact upload URL (we probably don't want this on all URLs).

DaAwesomeP · Apr 02 '24 20:04

@FloEdelmann OK, I pushed another change to specifically allow the larger client body size (as before) and also a longer read timeout (new, to prevent error 504 with large uploads). I set these only for the upload URL since they probably aren't desirable anywhere else.

Nested location blocks do not inherit/merge directives in Nginx, so I opted for two separate blocks. If we need more such blocks in the future, we can switch to a separate include for the proxy_pass to avoid redundant statements.
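
For reference, a minimal sketch of that two-sibling-block layout. The upload path, app address, and exact values here are assumptions for illustration; the real paths and values in this PR may differ:

```nginx
# Default proxy block: normal limits apply to all other URLs.
location / {
    proxy_pass http://127.0.0.1:3000;  # assumed app address
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

# Upload endpoint only: allow a larger request body and a longer read
# timeout. A sibling block (not nested) so proxy_pass stays explicit.
location /import-fixture-file {        # hypothetical upload URL
    client_max_body_size 20m;          # raise Nginx's default 1m body limit
    proxy_read_timeout   120s;         # prevent 504 on slow large uploads
    proxy_pass http://127.0.0.1:3000;  # assumed app address
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Repeating proxy_pass and the proxy_set_header lines in both blocks is exactly the redundancy that a shared include file would remove later.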

DaAwesomeP · Apr 02 '24 20:04

Nice, I updated the server with this configuration. Could you please check again?

But 600 seconds (10 minutes) is a bit much; I'd prefer a 2-minute timeout. Anything longer than that is probably too expensive for the server, and I don't want to overload it for GDTF imports that most often require a lot of manual cleanup afterwards anyway...

FloEdelmann · Apr 02 '24 21:04

Uploading a 6.3 MB file takes a really long time and still fails with a 504 error, even though I'm on a fast university connection while trying this.

I noticed the JSON output was being written to the console, so I copied it and saved it to a file. The GDTF file grows when base64-encoded: the 6.3 MB file becomes 8.4 MB, but that's still not that big.
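
(That growth is exactly the expected base64 overhead: base64 encodes every 3 input bytes as 4 ASCII characters, so the encoded size is roughly 4/3 of the original, and 6.3 MB × 4/3 ≈ 8.4 MB.)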

I think the issue might be in the underlying app rather than Nginx; possibly the app can't handle 8.4 MB of JSON? Maybe there is another way of doing this, using a normal file upload endpoint/form field and converting it to JSON on the backend?
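
For illustration only, a minimal sketch of such a multipart endpoint using Express and multer. The route name, field name, and size limit are all assumptions, and this is not how the project's current upload works:

```typescript
import express from "express";
import multer from "multer";

const app = express();

// Keep the upload in memory, mirroring the current in-memory parsing,
// with a size cap (20 MB is an assumed value, not the project's limit).
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 20 * 1024 * 1024 },
});

// Hypothetical route and field name; the real upload endpoint differs.
app.post("/import-gdtf", upload.single("gdtfFile"), (req, res) => {
  if (!req.file) {
    res.status(400).json({ error: "no file received" });
    return;
  }
  // The raw zip bytes arrive without base64 overhead; if the existing
  // import code expects base64 JSON, the conversion can happen here on
  // the server instead of in the browser.
  const payload = {
    fileName: req.file.originalname,
    fileContentBase64: req.file.buffer.toString("base64"),
  };
  // ...hand payload to the existing import logic...
  res.status(202).json({ receivedBytes: req.file.size });
});

app.listen(3000);
```

The multipart body carries the raw file bytes, so the request stays close to the on-disk file size instead of paying the 4/3 base64 penalty on the wire.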

DaAwesomeP · Apr 02 '24 21:04

Yes, bigger file uploads are usually done using multipart form encoding rather than base64 text encoding. But I don't think it is worth reworking that whole upload flow just to allow larger files. The zip decompression and file parsing that happen after the upload (currently all in memory, no streaming) also have to be able to handle those larger files.

And as I said, larger files usually can't be imported cleanly enough to leave only a little manual cleanup, so the bigger the uploads, the bigger the manual cleanup afterwards. So I currently tend towards closing this as wont-fix if it's more effort than just "increase the upload limit in Nginx".

For your specific case: You can also run the import script locally if you have the development environment set up, and then manually submit a pull request with the new file.

FloEdelmann · Apr 02 '24 21:04