[Enhancement] Support for very large uploads by chunking
I would like to upload a few videos to Lychee that are gigabytes in size. Although this is documented and can be done in a single request, it requires tweaking the nginx and PHP configuration. It also prevents Lychee from being hosted behind proxies such as Cloudflare, which limits request sizes to 100 MB on its free plan.
Instead, I think a better solution is to chunk uploads into small pieces. This would mitigate the upload size issues and could also speed things up by uploading chunks in parallel.
As for the implementation details, it could be modeled after S3's multipart upload.
> Multipart upload is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
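For illustration, here is a minimal sketch of that three-step flow using boto3 against actual S3. The bucket name, key, file path, and part size are placeholders; a Lychee API would not talk to S3 itself, but could expose analogous init/part/complete endpoints.

```python
# Sketch of S3's three-step multipart flow (boto3), only to illustrate
# the model a chunked-upload API could mirror. Names are placeholders.
import boto3

CHUNK_SIZE = 50 * 1024 * 1024  # 50 MB parts; S3 requires >= 5 MB per part (except the last)

s3 = boto3.client("s3")

# Step 1: initiate the upload and receive an upload id
upload = s3.create_multipart_upload(Bucket="my-bucket", Key="videos/big.mp4")
upload_id = upload["UploadId"]

# Step 2: upload the parts (each part is a small request and could run in parallel)
parts = []
with open("big.mp4", "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(CHUNK_SIZE)
        if not chunk:
            break
        result = s3.upload_part(
            Bucket="my-bucket",
            Key="videos/big.mp4",
            UploadId=upload_id,
            PartNumber=part_number,
            Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": result["ETag"]})
        part_number += 1

# Step 3: complete the upload; the server assembles the object from the parts
s3.complete_multipart_upload(
    Bucket="my-bucket",
    Key="videos/big.mp4",
    UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```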
Another approach would be to scp the video file to the server and then use Import.
I am also in favor of @ildyria's workaround. Videos are not really Lychee's primary focus.
Moreover, chunked uploads add a lot of complexity to the code. At the moment, the code is not very robust with respect to failing or concurrent requests; both can already leave the database in an inconsistent state. Chunked uploads would significantly add to these problems, and we would also need to implement some kind of recovery strategy to re-upload missing chunks, etc.
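For reference, in the S3 model such a recovery pass looks roughly like the sketch below (using boto3's `list_parts`; the argument names and the in-memory `chunks` dict are placeholders, and pagination of the parts listing is ignored). A Lychee implementation would need an analogous "which chunks have you received" endpoint, which is exactly the extra surface area being discussed.

```python
# Recovery sketch: ask the server which parts arrived, re-upload only the gaps.
def resume_upload(s3, bucket: str, key: str, upload_id: str, chunks: dict[int, bytes]) -> list[dict]:
    received = s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id)
    done = {p["PartNumber"]: p["ETag"] for p in received.get("Parts", [])}

    parts = []
    for number, body in sorted(chunks.items()):
        if number in done:
            etag = done[number]  # already on the server, skip re-uploading
        else:
            result = s3.upload_part(
                Bucket=bucket, Key=key, UploadId=upload_id,
                PartNumber=number, Body=body,
            )
            etag = result["ETag"]
        parts.append({"PartNumber": number, "ETag": etag})

    # Caller then passes this to complete_multipart_upload(MultipartUpload={"Parts": parts})
    return parts
```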