minio-js
Multi Part Upload Fails to find ID
When doing a simple upload with a stream, which triggers a multipart upload, it fails on something related to `upload.Owner.ID`.
https://github.com/minio/minio-js/blob/6b916367ec13283b950d5932825e669a344d11ce/src/internal/xml-parser.ts#L356
```
TypeError: Cannot read properties of undefined (reading 'ID')
    at /opt/app/node_modules/minio/dist/main/internal/xml-parser.js:324:30
    at Array.forEach (<anonymous>)
    at Object.parseListMultipart (/opt/app/node_modules/minio/dist/main/internal/xml-parser.js:320:41)
    at Client.listIncompleteUploadsQuery (/opt/app/node_modules/minio/dist/main/internal/client.js:1049:23)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Client.findUploadId (/opt/app/node_modules/minio/dist/main/internal/client.js:1109:22)
    at async Client.uploadStream (/opt/app/node_modules/minio/dist/main/internal/client.js:1396:30)
    at async S3FileAdapter.put (/opt/app/dist/app.js:5418:9)
    at async CloudStorageService.put (/opt/app/dist/app.js:3240:9)
    at async /opt/app/dist/app.js:2478:13
Unhandled Rejection at Promise Promise {
  <rejected> TypeError: Cannot read properties of undefined (reading 'ID')
      at /opt/app/node_modules/minio/dist/main/internal/xml-parser.js:324:30
      at Array.forEach (<anonymous>)
      at Object.parseListMultipart (/opt/app/node_modules/minio/dist/main/internal/xml-parser.js:320:41)
      at Client.listIncompleteUploadsQuery (/opt/app/node_modules/minio/dist/main/internal/client.js:1049:23)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async Client.findUploadId (/opt/app/node_modules/minio/dist/main/internal/client.js:1109:22)
      at async Client.uploadStream (/opt/app/node_modules/minio/dist/main/internal/client.js:1396:30)
      at async S3FileAdapter.put (/opt/app/dist/app.js:5418:9)
      at async CloudStorageService.put (/opt/app/dist/app.js:3240:9)
      at async /opt/app/dist/app.js:2478:13
}
```
Note: I'm using GCP as my S3 service. 7.1.3 works, so I assume it's from the refactored code.
```js
const stream = fs.createReadStream(path.join(__dirname, "test.mp4"));

client = new Client({
  endPoint: config.endpoint,
  accessKey: config.key,
  secretKey: config.secret,
});

await client.putObject(bucket, filename, stream);
```
@lukepolo the owner fields appear as expected in MinIO and S3.
Please double check and share traces/output for us to review.
I do see that MinIO returns something like:
```xml
<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Bucket>test-bucket</Bucket>
  <KeyMarker></KeyMarker>
  <UploadIdMarker></UploadIdMarker>
  <NextKeyMarker></NextKeyMarker>
  <NextUploadIdMarker></NextUploadIdMarker>
  <Prefix>datafile-129-MB</Prefix>
  <MaxUploads>1000</MaxUploads>
  <IsTruncated>false</IsTruncated>
  <Upload>
    <Key>datafile-129-MB</Key>
    <UploadId>N2ZiYjM2MDAtY2YwNC00YjBkLWFkNjUtODdjNjM2OTMzOTk2LmQzOTY4OTEyLWJhYTAtNDU1YS04ZTIxLTQ1ZjE5ODJjYjQ3Mw</UploadId>
    <Initiator>
      <ID></ID>
      <DisplayName></DisplayName>
    </Initiator>
    <Owner>
      <ID></ID>
      <DisplayName></DisplayName>
    </Owner>
    <StorageClass></StorageClass>
    <Initiated>2024-05-08T10:59:17.581Z</Initiated>
  </Upload>
</ListMultipartUploadsResult>
```
and S3 returns something like:
```json
{
  "prefixes": [],
  "uploads": [
    {
      "key": "datafile-129-MB",
      "uploadId": "5nHHMvfqkwJ9Nau465U_rTL3QorNHCUkxGnGpxCf6oQ.3m8QCSYGr4871HTuEWNirAjDsXz7skb7YIrFAfQiGknKjKHCU7MQ7fQ9dFtPt3KHDTAZJJ62WnDleID5j_nE",
      "initiator": {
        "id": "arn:aws:iam::55555555:user/pra",
        "displayName": "prakash"
      },
      "owner": {
        "id": "f79437bbcf4cb155027ef2sd5a8d29dac63ffbadefa825c406b64cd2bssa60453029",
        "displayName": "devops"
      },
      "storageClass": "STANDARD",
      "initiated": "2024-05-08T11:29:46.000Z"
    }
  ],
  "isTruncated": false,
  "nextKeyMarker": "datafile-129-MB",
  "nextUploadIdMarker": ""
}
```
Also, I checked the parser; no change has been made other than adding typing. We will validate against GCS when we get credentials and update the parser (add guard conditions for the nested fields) if required.
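A guard along these lines would tolerate the missing nodes (an illustrative sketch, not the actual parser code; `RawUpload` and `parseUploadIdentity` are hypothetical names, and the field names mirror the XML shown above):

```typescript
// Shape of one <Upload> entry after XML parsing; GCS may omit
// <Initiator> and <Owner> entirely, so they are optional here.
type RawUpload = {
  Key: string
  UploadId: string
  Initiator?: { ID?: string; DisplayName?: string }
  Owner?: { ID?: string; DisplayName?: string }
}

function parseUploadIdentity(upload: RawUpload) {
  return {
    key: upload.Key,
    uploadId: upload.UploadId,
    // Optional chaining avoids the "Cannot read properties of
    // undefined (reading 'ID')" TypeError when the nodes are absent.
    initiator: {
      id: upload.Initiator?.ID ?? '',
      displayName: upload.Initiator?.DisplayName ?? '',
    },
    owner: {
      id: upload.Owner?.ID ?? '',
      displayName: upload.Owner?.DisplayName ?? '',
    },
  }
}
```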
I'll look when I have some time; for now we just reverted back and it works as expected.
Yeah, looks like it's not a change in the upgrade.
```
2024-06-10T13:57:15.860Z cloud:index <5> Uncaught Exception thrown TypeError: Cannot read properties of undefined (reading 'ID')
    at /opt/app.js:2:4910367
    at Array.forEach (<anonymous>)
    at parseListMultipart (/opt/app.js:2:4910311)
    at Transform._flush (/opt/app.js:2:4916105)
    at Transform.prefinish (/opt/app.js:2:4491008)
    at Transform.emit (node:events:519:28)
    at prefinish (/opt/app.js:2:4496066)
    at finishMaybe (/opt/app.js:2:4496151)
    at endWritable (/opt/app.js:2:4499517)
    at Writable.end (/opt/app.js:2:4499614)
```
Not sure exactly where the issue is yet, and it's not on every upload.
Turned on trace; will upload logs here soon~
```
2024-06-10T14:23:47.324Z cloud:ExpressHttpServer <3> web socket server setup: /debugger
2024-06-10T14:23:47.366Z cloud:ExpressHttpServer <3> http server listening on (5586)
2024-06-10T14:23:47.416Z cloud:FFMpeg <3> ffmpeg version git-2024-05-30-03712bc0d9 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 12 (Debian 12.2.0-14)
configuration: --prefix=./ffmpeg-dist --disable-everything --enable-gnutls --enable-protocol=http --enable-protocol=https --enable-protocol=file --enable-protocol=pipe --enable-demuxer=aac --enable-demuxer=h264 --enable-demuxer=hevc --enable-demuxer=image2 --enable-demuxer=image2pipe --enable-demuxer=mjpeg --enable-demuxer=rtsp --enable-parser=aac --enable-parser=h264 --enable-parser=hevc --enable-parser=mjpeg --enable-muxer=mp4 --enable-muxer=segment --enable-muxer=image2 --enable-muxer=image2pipe --enable-decoder=aac --enable-decoder=h264 --enable-decoder=hevc --enable-decoder=mjpeg --enable-encoder=mjpeg --enable-encoder=libx264 --enable-gpl --enable-libx264 --enable-filter=scale --enable-shared --disable-static --disable-autodetect
libavutil 56. 26.100 / 56. 26.100
libavcodec 58. 47.103 / 58. 47.103
libavformat 58. 26.101 / 58. 26.101
libavdevice 58. 6.101 / 58. 6.101
libavfilter 7. 48.100 / 7. 48.100
libswscale 5. 4.100 / 5. 4.100
libswresample 3. 4.100 / 3. 4.100
libpostproc 55. 4.100 / 55. 4.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
Use -h to get full help or, even better, run 'man ffmpeg'
2024-06-10T14:23:47.429Z cloud:QueueWorker <3> starting (5) workers for queue: default
2024-06-10T14:23:47.430Z cloud:QueueWorker <3> starting (5) workers for queue: clip-convert
2024-06-10T14:23:47.431Z cloud:QueueWorker <3> starting (5) workers for queue: clip-delete
2024-06-10T14:24:17.472Z cloud:QueueWorker <5> Unable to process job (ConvertEventToClip): Error: job stalled more than allowable limit
at /opt/app.js:2:2916411
at Array.forEach (<anonymous>)
at WorkerPro.notifyFailedJobs (/opt/app.js:2:2916378)
at WorkerPro.moveStalledJobsToWait (/opt/app.js:2:2916317)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async WorkerPro.checkConnectionError (/opt/app.js:2:2702334)
at async WorkerPro.startStalledCheckTimer (/opt/app.js:2:2914826)
at async Timeout.<anonymous> (/opt/app.js:2:2914939)
2024-06-10T14:30:00.086Z cloud:ExpireClips <3> Expired Clips []
2024-06-10T14:32:55.466Z cloud:ClipService <3> using 6162 segments to create clip
REQUEST: GET /qx-clips-staging?location
host: storage.googleapis.com
user-agent: MinIO (linux; x64) minio-js/7.1.3
x-amz-date: 20240610T143255Z
x-amz-content-sha256: UNSIGNED-PAYLOAD
authorization: AWS4-HMAC-SHA256 Credential=REDACTED/20240610/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
RESPONSE: 200
x-goog-metageneration: 2
content-type: application/xml; charset=UTF-8
content-length: 90
access-control-allow-origin: *
access-control-expose-headers: Content-Type
x-guploader-uploadid: REDACTED
date: Mon, 10 Jun 2024 14:32:55 GMT
expires: Mon, 10 Jun 2024 14:32:55 GMT
cache-control: private, max-age=0
server: UploadServer
REQUEST: GET /qx-clips-staging?location
host: storage.googleapis.com
user-agent: MinIO (linux; x64) minio-js/7.1.3
x-amz-date: 20240610T143922Z
x-amz-content-sha256: UNSIGNED-PAYLOAD
authorization: AWS4-HMAC-SHA256 Credential=REDACTED/20240610/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
RESPONSE: 200
x-goog-metageneration: 2
content-type: application/xml; charset=UTF-8
content-length: 90
access-control-allow-origin: *
access-control-expose-headers: Content-Type
x-guploader-uploadid: REDACTED
date: Mon, 10 Jun 2024 14:39:22 GMT
expires: Mon, 10 Jun 2024 14:39:22 GMT
cache-control: private, max-age=0
server: UploadServer
REQUEST: GET /qx-clips-staging?uploads&delimiter=&max-uploads=1000&prefix=clip-1.mp4
host: storage.googleapis.com
user-agent: MinIO (linux; x64) minio-js/7.1.3
x-amz-date: 20240610T143922Z
x-amz-content-sha256: UNSIGNED-PAYLOAD
authorization: AWS4-HMAC-SHA256 Credential=REDACTED/20240610/US-CENTRAL1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
RESPONSE: 200
access-control-allow-origin: *
access-control-expose-headers: Content-Type
x-guploader-uploadid: REDACTED
date: Mon, 10 Jun 2024 14:39:22 GMT
expires: Mon, 10 Jun 2024 14:39:22 GMT
cache-control: private, max-age=0
content-length: 684
server: UploadServer
content-type: text/html; charset=UTF-8
2024-06-10T14:39:22.584Z cloud:index <5> Uncaught Exception thrown TypeError: Cannot read properties of undefined (reading 'ID')
at /opt/app.js:2:4288007
at Array.forEach (<anonymous>)
at parseListMultipart (/opt/app.js:2:4287951)
at Transform._flush (/opt/app.js:2:4284672)
at Transform.prefinish (/opt/app.js:2:4605757)
at Transform.emit (node:events:519:28)
at prefinish (/opt/app.js:2:4610815)
at finishMaybe (/opt/app.js:2:4610900)
at endWritable (/opt/app.js:2:4614266)
at Writable.end (/opt/app.js:2:4614363)
```
Looking at `xmlobj.Upload` I get this back:
```js
{
  Key: 'e7e5d605714a60e7d8284972512aa482.mp4',
  UploadId: 'ABPnzm5GuWRdkSBkDua2zrt9ShAgHApBy7Do5V9Lowwq_tIst0mLUbWlFHraF0Q1Y2I4B7w',
  StorageClass: 'STANDARD',
  Initiated: '2024-06-11T13:48:38.520831Z'
}
```
It seems to be missing the Initiator. Maybe there is another upload ongoing at the same time?
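The crash can be reproduced directly from a dump like the one above (a minimal sketch; the unguarded `upload.Owner.ID` access mirrors the parser line the stack traces point at):

```typescript
// Sketch: an Upload entry without Owner/Initiator nodes, as GCS returns here.
const upload: { Key: string; Owner?: { ID: string } } = {
  Key: 'e7e5d605714a60e7d8284972512aa482.mp4',
}

let error: unknown
try {
  // Unguarded nested access, as in the failing parser code path.
  const ownerId = (upload as { Owner?: { ID: string } }).Owner!.ID
  void ownerId
} catch (e) {
  error = e // TypeError: Cannot read properties of undefined (reading 'ID')
}
```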
OK, it seems there's a multipart upload in progress. I'm looking for a way to cancel it, but it seems odd that the Initiator / Owner is missing.
I was able to fix the issue by canceling any previous upload id associated with the file:
```ts
import { XMLParser } from 'fast-xml-parser'

const res = await this.client.makeRequestAsync({
  method: 'GET',
  bucketName: this.bucket,
  query: `uploads&delimiter=&max-uploads=1000&prefix=$(unknown)`,
})

const body = await new Promise<Buffer>((resolve, reject) => {
  const chunks: Buffer[] = []
  res
    .on('data', (chunk: Buffer) => chunks.push(chunk))
    .on('error', (e) => reject(e))
    .on('end', () => resolve(Buffer.concat(chunks)))
})

const previousUploads = new XMLParser().parse(body.toString())?.ListMultipartUploadsResult.Upload

for (const previousUpload of previousUploads) {
  logger.info('PREVIOUS', previousUpload)
  await this.client.abortMultipartUpload(this.bucket, filename, previousUpload.UploadId)
}
```
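For what it's worth, the same cleanup can likely be done through the public API: minio-js exposes `removeIncompleteUpload(bucket, objectName)`, which finds and aborts the incomplete upload for an object. Note it goes through the same list-multipart-uploads parsing, so on GCS it may hit the same TypeError until the parser tolerates the missing fields. A minimal sketch (the structural `MinioLike` type and `clearStaleUpload` name are stand-ins so the snippet is self-contained; pass a real minio `Client` in practice):

```typescript
// Sketch: abort any stale multipart upload for an object via the public API.
// MinioLike models just the one method this sketch needs.
type MinioLike = {
  removeIncompleteUpload(bucket: string, objectName: string): Promise<void>
}

async function clearStaleUpload(
  client: MinioLike,
  bucket: string,
  objectName: string,
): Promise<void> {
  // Lists incomplete uploads for the object and aborts the matching one.
  await client.removeIncompleteUpload(bucket, objectName)
}
```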
Thank you for the update @lukepolo.