
The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.

Open Fares92 opened this issue 4 years ago • 19 comments

When using multipart copy, I get this error in the middle of the copy action: "The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed." I use log messages to show errors, so I can see that some parts were copied successfully, but after this error the whole action was aborted. Does anyone have an idea? I log an error for each copyPart.
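
For reference, this is roughly the shape of the call (simplified; bucket and key names here are placeholders):

```js
// Simplified sketch of the copyObjectMultipart call; bucket and key names
// are placeholders. The .catch is where the "upload does not exist" error
// surfaces.
s3Module.copyObjectMultipart({
    source_bucket: 'source-bucket',
    object_key: 'big-file.bin',
    destination_bucket: 'dest-bucket',
    copied_object_name: 'big-file.bin',
    object_size: objectSize,
    copy_part_size_bytes: 50000000,
    copied_object_permissions: 'bucket-owner-full-control',
})
    .then(result => console.log('copy finished', result))
    .catch(err => console.log('copyPart error', err));
```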

Fares92 avatar Sep 09 '20 11:09 Fares92

I am getting this error as well.

jeffbski-rga avatar Oct 13 '20 19:10 jeffbski-rga

@jeffbski-rga I fixed it. I'm using a Lambda function on AWS, where memory is limited, so after many loops the memory fills up and the system loses track of the file; that's why we get the error "The specified upload does not exist". So I set a dynamic part size that depends on the object size, like this :+1:

```js
let part = Math.trunc(size / 10);
let options = {
    source_bucket: inputBucket,
    object_key: sourceFile,
    destination_bucket: bucket,
    copied_object_name: outputKey,
    object_size: size,
    copy_part_size_bytes: part,
    copied_object_permissions: 'bucket-owner-full-control',
};
console.log('options', options);
console.log('copying to output bucket', bucket, sourceFile, outputKey, inputBucket);

return s3Module.copyObjectMultipart(options);
```

That way there are always only 10 parts, and it works. I can help you if you have a screenshot of your error.

Fares92 avatar Oct 14 '20 09:10 Fares92

It happens for me as well, and I can reproduce the error with small files. From what I see, the error happens if the file size is less than the `copy_part_size_bytes` parameter.

hajjimurad avatar Nov 20 '20 14:11 hajjimurad

@hajjimurad The problem is that the Lambda function has limited memory (max 512 MB in my case), so if the for loop runs too many times the Lambda function runs out of memory, which is why it returns this error. I solved it by capping the number of loops at 10: I divide the size by 10, so there are always 10 parts.

Fares92 avatar Nov 20 '20 15:11 Fares92

@Fares92 in my case I was getting the error in a regular script, not in Lambda.

hajjimurad avatar Nov 20 '20 16:11 hajjimurad

@Fares92 I figured out that my problem occurred when I used it with a file smaller than 5 MB, which apparently is not supported by multipart. So now I use a normal S3 copy for files smaller than 5 MB and multipart copy for larger files.
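
A minimal sketch of that split, assuming an SDK v2 `s3` client and the same `options` object this module takes (the helper name is a placeholder):

```js
// Hypothetical helper: plain server-side CopyObject below 5 MB,
// copyObjectMultipart above it.
const FIVE_MB = 5 * 1024 * 1024;

function copyAnySize(s3, s3Module, options) {
    if (options.object_size < FIVE_MB) {
        // One-request server-side copy for small objects
        return s3.copyObject({
            Bucket: options.destination_bucket,
            Key: options.copied_object_name,
            CopySource: `${options.source_bucket}/${options.object_key}`,
        }).promise();
    }
    return s3Module.copyObjectMultipart(options);
}
```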

jeffbski-rga avatar Nov 20 '20 19:11 jeffbski-rga

The 5 MB minimum file size limit is self-imposed by this module. It's a bug. I fixed it and opened a pull request: #45

jox avatar Feb 10 '22 19:02 jox

I am still getting this error.

namdiemefo avatar Mar 05 '22 21:03 namdiemefo

@namdiemefo do you mean you applied my fix and you still get the error?

jox avatar Mar 05 '22 21:03 jox

Yes, I did, and I'm still getting the error.

I followed the tutorials at https://gist.github.com/nick-ChenZe/71670af034744ee3fe9d801de632836d and https://medium.com/@vishnuit18/uploading-files-to-aws-s3-bucket-using-nodejs-8c399eea2d19 and still got the same issue.

namdiemefo avatar Mar 05 '22 22:03 namdiemefo

```js
// `s3` and `bucket_name` are assumed to be defined elsewhere in the script.
// These counters were referenced without being declared:
const fs = require('fs');

const maxUploadTries = 3;
const multipartMap = { Parts: [] };
let partNum = 0;
let numPartsLeft = 0;
let startTime;

function startUpload(req, res, next) {
    // Use one key for the entire upload: uploadPart and
    // completeMultipartUpload must reference the same Key that was passed
    // to createMultipartUpload, otherwise S3 answers with
    // "The specified upload does not exist".
    const key = `${Date.now().toString()}.mp4`;
    const params = {
        Key: key,
        Bucket: bucket_name,
    };

    let file = req.file;
    // var buffer = Buffer.from(file.toString(), 'base64')
    console.log(__dirname);
    var buffer = fs.readFileSync('/Users/namdiemefo/Desktop/naemo/tag/tagging-server/tagging-panel/app/services/MFMFCVSKWARAUNITED.mp4');

    const chunkSize = Math.pow(1024, 2) * 10; // 10 MB per part
    const bufferSize = buffer.length;
    console.log(bufferSize);
    numPartsLeft = Math.ceil(bufferSize / chunkSize);
    console.log(numPartsLeft);
    startTime = new Date();

    s3.createMultipartUpload(params, function (err, multipart) {
        if (err) {
            console.log('Error!', err);
            return;
        }
        console.log('Got upload ID', multipart.UploadId);

        // Grab each chunkSize slice of the buffer and upload it as a part
        for (var rangeStart = 0; rangeStart < buffer.length; rangeStart += chunkSize) {
            partNum++;
            var end = Math.min(rangeStart + chunkSize, buffer.length);
            var partParams = {
                Body: buffer.slice(rangeStart, end),
                Bucket: bucket_name,
                Key: key,
                PartNumber: String(partNum),
                UploadId: multipart.UploadId,
            };

            // Send a single part
            console.log('Uploading part: #', partParams.PartNumber, ', Range start:', rangeStart);
            uploadPart(multipart, partParams, key);
        }
    });
}

function uploadPart(multipart, partParams, key, tryNumber) {
    var tryNum = tryNumber || 1;
    console.log(`try ${tryNum}, parts left: ${numPartsLeft}`);
    s3.uploadPart(partParams, function (err, data) {
        if (err) {
            console.log('multiErr, upload part error:', err);
            if (tryNum < maxUploadTries) {
                console.log('Retrying upload of part: #', partParams.PartNumber);
                uploadPart(multipart, partParams, key, tryNum + 1);
            } else {
                console.log('Failed uploading part: #', partParams.PartNumber);
            }
            return;
        }
        // In SDK v2, `this` inside the callback is the AWS.Response, so the
        // part number that was actually sent is on this.request.params
        multipartMap.Parts[this.request.params.PartNumber - 1] = {
            ETag: data.ETag,
            PartNumber: Number(this.request.params.PartNumber),
        };

        console.log('Completed part', this.request.params.PartNumber);
        console.log('mData', data);
        if (--numPartsLeft > 0) return; // complete only when all parts are uploaded

        var doneParams = {
            Bucket: bucket_name,
            Key: key,
            MultipartUpload: multipartMap,
            UploadId: multipart.UploadId,
        };

        console.log('Completing upload...');
        completeMultipartUpload(doneParams);
    });
}

function completeMultipartUpload(doneParams) {
    s3.completeMultipartUpload(doneParams, function (err, data) {
        if (err) {
            console.log('An error occurred while completing the multipart upload');
            console.log(err);
        } else {
            var delta = (new Date() - startTime) / 1000;
            console.log('Completed upload in', delta, 'seconds');
            console.log('Final upload data:', data);
        }
    });
}
```

namdiemefo avatar Mar 05 '22 22:03 namdiemefo

Allow me to doubt that you applied my fix; it seems like you didn't even use the aws-s3-multipart-copy module.

Why not use a gist or something to post your code?

https://gist.github.com/

jox avatar Mar 05 '22 23:03 jox

My bad, I used the aws-sdk directly. Here is the full code: https://gist.github.com/namdiemefo/4db23767b9be11857383c5e3d5378114

namdiemefo avatar Mar 05 '22 23:03 namdiemefo

My advice: use the aws-s3-multipart-copy module.

jox avatar Mar 05 '22 23:03 jox

The module is for copying from one bucket to another. I'm trying to upload large files from the browser to S3.
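
Roughly what I'm trying to achieve, as a sketch (bucket name and file path are placeholders); in SDK v2 the managed `s3.upload()` handles the multipart mechanics internally, so there is no upload ID to get wrong:

```js
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();

// Managed upload: the SDK splits the body into parts and tracks the
// upload ID itself. Bucket name and file path are placeholders.
const upload = s3.upload(
    {
        Bucket: 'my-bucket',
        Key: `${Date.now().toString()}.mp4`,
        Body: fs.createReadStream('./big-video.mp4'),
    },
    { partSize: 10 * 1024 * 1024, queueSize: 4 }
);

upload.on('httpUploadProgress', p => console.log('progress', p.loaded, '/', p.total));

upload.send((err, data) => {
    if (err) return console.log('upload failed', err);
    console.log('upload finished at', data.Location);
});
```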


namdiemefo avatar Mar 05 '22 23:03 namdiemefo

Any solution on this? I'm getting the same error: "The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed."

chiragdev8062 avatar Dec 29 '22 05:12 chiragdev8062

same error here, any solution?

pablogeek avatar Jan 31 '23 17:01 pablogeek

I don't know if it helps anyone, but what I did was this: in the S3 console -> bucket -> Permissions -> CORS, I added this line:

```json
{
    ...
    "ExposeHeaders": [
        "ETag"
    ],
    ...
}
```
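
For reference, the same change can be applied from code; this is a hedged sketch (bucket name and allowed origin are placeholders), where `ExposeHeaders: ['ETag']` is the part that matters, since the browser must be able to read each part's ETag to complete a multipart upload:

```js
// Hypothetical putBucketCors call mirroring the console change above;
// bucket name and allowed origin are placeholders.
s3.putBucketCors({
    Bucket: 'my-bucket',
    CORSConfiguration: {
        CORSRules: [{
            AllowedHeaders: ['*'],
            AllowedMethods: ['GET', 'PUT', 'POST'],
            AllowedOrigins: ['https://example.com'],
            ExposeHeaders: ['ETag'],
            MaxAgeSeconds: 3000,
        }],
    },
}, err => {
    if (err) console.log('failed to set CORS', err);
});
```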

aguerrah9 avatar Sep 18 '23 22:09 aguerrah9

I got the same error 😞

ramonpaolo avatar Oct 08 '23 11:10 ramonpaolo