uppy
S3 multipart: network error causes uppy to get stuck in corrupt state
Initial checklist
- [X] I understand this is a bug report and questions should be posted in the Community Forum
- [X] I searched issues and couldn’t find anything (or linked relevant results below)
Link to runnable example
No response
Steps to reproduce
Simple test 1: Upload one file and force the POST call to the companion's `complete` endpoint to fail.
Simple test 2: Upload one file and corrupt one of the AWS upload URLs for an individual part, to simulate a network error.
Advanced test: In an Angular app, upload two files with S3 multipart, one large and one small. For the small file only, simulate a failure in the companion's `complete` endpoint, or corrupt one of the AWS URLs for an individual part. Let the large file complete successfully.
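The companion failure in the simple tests can be simulated without a proxy by stubbing `fetch` in the page. This is only a sketch; the URL pattern matched here is an assumption based on the `/s3/multipart/.../complete` endpoint shown in the console error, so adjust it to your companion deployment.

```javascript
// Sketch: simulate "Simple test 1" by stubbing fetch so any request to the
// multipart `complete` endpoint rejects, as a network error would.
// The URL pattern is an assumption; adjust it to match your companion.
const originalFetch = globalThis.fetch;

globalThis.fetch = (input, init) => {
  const url = String(input);
  if (url.includes('/s3/multipart/') && url.includes('/complete')) {
    return Promise.reject(new TypeError('Simulated network failure'));
  }
  return originalFetch(input, init);
};
```

Run this before starting the upload; all other requests pass through to the real `fetch` unchanged.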
Expected behavior
Simple test: A result object should be returned upon completion, and the failed-files array should have a length of 1.
Advanced test: After the first file fails, the dashboard's progress should still update for the remaining files, and the result object should be returned on completion.
Actual behavior
Uncaught promise errors in the log.
Overall progress stops being tracked in the dashboard and the `complete` event does not fire; no result object containing success and failure arrays is returned.
Also, because Uppy did not terminate cleanly, it gets stuck in a corrupt state. Even though the file(s) display as failed and no network traffic is running in the browser, the following call returns `true`:

```js
uppy.getObjectOfFilesPerState().isUploadInProgress()
```
For a simulated network error on the companion's `complete` method, this error appears in the browser console:

```
Uncaught (in promise) Error: Could not post to xxx/s3/multipart/xxxx/complete?key=xxx
    at RequestClient.js:203:13
    at _ZoneDelegate.invoke (zone.js:372:26)
```
For an error uploading an individual AWS part, this error appears in the browser console:

```
Error: Uncaught (in promise): Error: Non 2xx
    at MultipartUploader.js:497:21
```
Does anyone have a workaround for this? Reset Uppy when network failures occur?
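One client-side mitigation along those lines is to abort everything once a failure is detected, so Uppy's "upload in progress" state is cleared instead of hanging. This is a sketch, not an official recommendation: `cancelAll()` and the `upload-error` event are documented Uppy APIs, but the wrapper function and its name are mine.

```javascript
// Sketch of a workaround: on any upload error, cancel everything so Uppy
// does not stay stuck reporting an upload in progress.
// `installNetworkFailureReset` is a hypothetical helper, not part of Uppy.
function installNetworkFailureReset(uppy) {
  uppy.on('upload-error', (file, error) => {
    console.error('Upload failed, resetting Uppy:', file && file.name, error);
    // cancelAll() aborts in-flight uploads and clears Uppy's upload state.
    uppy.cancelAll();
  });
}
```

Note the trade-off: this also aborts other files still uploading, which may be undesirable in the two-file scenario above.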
Do you have a reproducible example? Errors from the console? Anything?
I had an issue with Uppy and AWS where I tried terminating `getUploadParameters` randomly with an error to test my error handling, and I realized there is a bug in the Uppy AWS multipart package. Sharing in case people find it useful.
I am currently just patching the package; this is the diff (in the `@uppy/aws-s3-multipart` package):
```diff
diff --git a/lib/index.js b/lib/index.js
index fa341973e2e8998a2d5b8f525b8f4703b2bdb028..cb60d2287014d1bd8394a2c1905942bd538f068c 100644
--- a/lib/index.js
+++ b/lib/index.js
@@ -179,7 +179,7 @@ export default class AwsS3Multipart extends BasePlugin {
       }
       return _classPrivateFieldLooseBase(this, _uploadLocalFile)[_uploadLocalFile](file);
     });
-    const upload = await Promise.all(promises);
+    const upload = await Promise.allSettled(promises);
     // After the upload is done, another upload may happen with only local files.
     // We reset the capability so that the next upload can use resumable uploads.
     _classPrivateFieldLooseBase(this, _setResumableUploadsCapability)[_setResumableUploadsCapability](true);
```
The code needs to wait for all promises to settle rather than throwing early. This seems to fix all my issues. Hope it helps!
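The behavioral difference can be shown in isolation. `Promise.all` rejects as soon as any promise rejects, so everything after the `await` in the plugin is skipped, while `Promise.allSettled` always resolves with one outcome record per promise:

```javascript
// Stand-ins for the per-file upload promises: one succeeds, one fails,
// mirroring the "one large, one small" scenario above.
const makeUploads = () => [
  Promise.resolve('large file done'),
  Promise.reject(new Error('Non 2xx')),
];

// Promise.all rejects on the first failure, so code after the `await`
// (resetting capabilities, firing `complete`) never runs.
Promise.all(makeUploads()).catch((err) => {
  console.log('Promise.all rejected:', err.message);
});

// Promise.allSettled always resolves with per-promise outcomes, so the
// cleanup after the `await` still runs and failures can be reported.
Promise.allSettled(makeUploads()).then((results) => {
  console.log(results.map((r) => r.status)); // [ 'fulfilled', 'rejected' ]
});
```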