Make maximum S3 part size configurable through env
Description
This PR allows to configure a maximum part size for the S3 multi-part upload instead of using a fixed one. This can be pretty usefull when an S3 service is used with limited bandwidth (e. g. 100Mbit/s).
With 100Mbit/s network, a 5GB part can take (depending on network conditions) up to 7 minutes. This can result in a 504 (Gateway timeout) for some S3 providers and let the backup fail due to their slow network.
By decreasing the part size to e. g. 1GB, it is possible to overcome such issues.
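For reference, the transfer-time figure above can be reproduced with a quick back-of-the-envelope calculation (an illustrative sketch, assuming the full link bandwidth is available; real-world throughput is lower due to protocol overhead):

```python
# Rough transfer-time estimate for a single multi-part upload part.
# Assumes the link's full bandwidth is usable; actual throughput is
# reduced by TCP/TLS overhead and congestion.

def transfer_seconds(part_size_bytes: int, bandwidth_mbit: int) -> float:
    """Seconds needed to push one part over the given bandwidth."""
    bits = part_size_bytes * 8
    return bits / (bandwidth_mbit * 1_000_000)

five_gb = 5 * 1024**3  # 5368709120 bytes
one_gb = 1024**3       # 1073741824 bytes

print(round(transfer_seconds(five_gb, 100) / 60, 1))  # ~7.2 minutes
print(round(transfer_seconds(one_gb, 100) / 60, 1))   # ~1.4 minutes
```

This is where the "up to 7 minutes" estimate for a 5 GB part comes from, and why a 1 GB part comfortably stays under typical gateway timeouts.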
Fix
This PR adds a new environment variable, BACKUP_MAX_PART_SIZE, which configures the maximum part size (in bytes). When set, the presigned URLs will be generated with the configured part size instead. The custom part size is also appended to the response for Wings, so Wings knows how big a single part should be.
When the environment variable is not present or an invalid value is set, the previous fixed part size of 5 GB will be used as the default/fallback value.
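The fallback behavior described above can be sketched as follows (illustrative Python, not the actual PHP/Laravel implementation; the constant and function names are made up for this example):

```python
import os

# Previous fixed part size used as default/fallback: 5 GB.
DEFAULT_PART_SIZE = 5 * 1024**3


def max_part_size() -> int:
    """Read BACKUP_MAX_PART_SIZE (bytes); fall back to 5 GB when the
    variable is missing or does not hold a positive integer."""
    raw = os.environ.get("BACKUP_MAX_PART_SIZE", "")
    try:
        value = int(raw)
    except ValueError:
        return DEFAULT_PART_SIZE
    return value if value > 0 else DEFAULT_PART_SIZE
```

The real implementation lives in the Panel's PHP code; this sketch only shows the "use the env value if valid, otherwise 5 GB" rule.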
Test Cases
- Do a backup without `BACKUP_MAX_PART_SIZE` being defined in the environment
  - Backup is created successfully
  - The count of the parts uploaded by Wings (see the Wings log) matches the expected count (e.g. 20 GB / 5 GB = ~4 parts)
  - Download the created backups and ensure the archive and its files are not corrupt
  - Restore the backups to ensure restoring works as expected
- Do a backup with `BACKUP_MAX_PART_SIZE` set to 1 GB (= 1073741824 bytes)
  - Backup is created successfully
  - The count of the parts uploaded by Wings (see the Wings log) matches the expected count (e.g. 20 GB / 1 GB = ~20 parts)
  - Download the created backups and ensure the archive and its files are not corrupt
  - Restore the backups to ensure restoring works as expected
- Do a backup with `BACKUP_MAX_PART_SIZE` set to 5 GB (= 5368709120 bytes)
  - Backup is created successfully
  - The count of the parts uploaded by Wings (see the Wings log) matches the expected count (e.g. 20 GB / 5 GB = ~4 parts)
  - Download the created backups and ensure the archive and its files are not corrupt
  - Restore the backups to ensure restoring works as expected
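The expected part counts in the test cases above follow from a simple ceiling division (a sketch; the helper name is hypothetical, not part of the PR):

```python
import math


def expected_parts(backup_size: int, part_size: int) -> int:
    """Number of upload parts needed for a backup of the given size."""
    return math.ceil(backup_size / part_size)


twenty_gb = 20 * 1024**3

print(expected_parts(twenty_gb, 5 * 1024**3))  # 4 parts
print(expected_parts(twenty_gb, 1024**3))      # 20 parts
```

The counts are approximate ("~4 parts") in practice because the archive size rarely divides evenly by the part size, so the last part is usually smaller.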
Additional Info
- If this PR gets accepted and merged, the documentation about backups has to be updated with the new environment variable.
- The documentation should then also be updated with the `BACKUP_PRESIGNED_URL_LIFESPAN` variable, as it can be useful in conjunction with this change.
- I did not find any tests for the `BackupRemoteUploadController` controller, therefore none have been updated in this PR.
Related Issue
Fixes #4361
Side note:
There is a `\Pterodactyl\Repositories\Eloquent\BackupRepository` injected and stored in the `$repository` field in the current implementation of `BackupRemoteUploadController`.
This `BackupRemoteUploadController::$repository` field does not seem to be used anywhere.
I'd suggest to either
- remove the field, or
- replace the `Backup::query()->where('uuid', $backup)->firstOrFail()` call with a repository `getById` function.

I'm more into Symfony than Eloquent and Laravel, so the proper solution is up to the team.
Either way, this change shouldn't be done in this PR, as it isn't in the scope of the current changes.
Please rebase against latest changes, or enable the Allow maintainers to edit option.
Done