nodejs-storage

test: finish updating conformance test upload cases

Open · tritone opened this issue 2 years ago · 2 comments

In #2190 I tried to add two new upload cases including 408 errors. One of them was fine, but the case "instructions": ["return-503-after-8192K", "return-408"] failed with the following error on various upload methods:

RangeError: The offset is lower than the number of bytes written. The server has 0 bytes and while 8388608 bytes has been uploaded - thus 8388608 bytes are missing. Stopping as this could result in data loss. Initiate a new upload to continue.

This looks like a bug in resetting the upload offset after a failure. It could also be a testbench bug for this particular case; however, I tested manually and verified that the testbench correctly implements the upload status query, at least for a simple chunked upload. Some debugging needs to be done in the Node library to understand what is failing here.
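Roughly, the failure comes from a consistency check along these lines (a simplified sketch; the function shape and variable names are illustrative rather than the library's actual code, though the message matches what resumable-upload.js throws):

```typescript
// Simplified sketch of the offset consistency check (illustrative names only).
// After a failed chunk, the client asks the server how many bytes it has
// persisted. If the server reports fewer bytes than the client has already
// flushed and discarded from its local buffer, it cannot rewind and aborts.
function assertOffsetIsConsistent(
  serverPersistedBytes: number, // offset reported by the upload status query
  clientBytesWritten: number // bytes the client has already written out
): void {
  if (serverPersistedBytes < clientBytesWritten) {
    const missing = clientBytesWritten - serverPersistedBytes;
    throw new RangeError(
      `The offset is lower than the number of bytes written. The server has ` +
        `${serverPersistedBytes} bytes and while ${clientBytesWritten} bytes has been uploaded - ` +
        `thus ${missing} bytes are missing. Stopping as this could result in data loss. ` +
        `Initiate a new upload to continue.`
    );
  }
}
```

In the failing case the status query evidently comes back with 0 persisted bytes after the 503/408 sequence, while the client still believes it has written 8388608 bytes, so whatever is supposed to reset the local offset back to the server's value after a failure does not appear to be happening.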

See here for the test case to add: https://github.com/googleapis/conformance-tests/blob/main/storage/v1/retry_tests.json#L257

tritone · May 08 '23 20:05

I think I ran into something similar:

RangeError: The offset is lower than the number of bytes written. The server has 183762944 bytes and while 184465013 bytes has been uploaded - thus 702069 bytes are missing. Stopping as this could result in data loss. Initiate a new upload to continue.
    at Upload.startUploading (/home/simon/Dev/hokify/hokify/node_modules/.pnpm/@[email protected]/node_modules/@google-cloud/storage/build/cjs/src/resumable-upload.js:463:32)
    at Upload.continueUploading (/home/simon/Dev/hokify/hokify/node_modules/.pnpm/@[email protected]/node_modules/@google-cloud/storage/build/cjs/src/resumable-upload.js:450:21)
    at processTicksAndRejections (node:internal/process/task_queues:95:5) RangeError

It's hard to reproduce, but it has been happening quite a lot lately. Any idea what I could try to fix this?

simllll · Apr 18 '24 20:04
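One workaround that might be worth trying while the underlying offset handling is investigated (a sketch only: the bucket/path arguments and the attempt count are placeholders, and it restarts the whole upload rather than resuming it): catch this specific RangeError and start a brand-new upload, which is what the error message itself asks for.

```typescript
import {Storage} from '@google-cloud/storage';

const storage = new Storage();

// Placeholder workaround: restart the entire upload when the offset-mismatch
// RangeError surfaces. Each retry initiates a fresh resumable session, so no
// state from the broken session is reused.
async function uploadWithRestart(
  bucketName: string,
  localPath: string,
  maxAttempts = 3
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await storage.bucket(bucketName).upload(localPath, {resumable: true});
      return;
    } catch (err) {
      const offsetMismatch =
        err instanceof RangeError &&
        /offset is lower than the number of bytes written/i.test(err.message);
      if (!offsetMismatch || attempt === maxAttempts) {
        throw err;
      }
      // Otherwise loop and initiate a new upload, as the error suggests.
    }
  }
}
```

Tuning chunkSize on createWriteStream is another knob that could change the behavior here, but that is speculative.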

We experienced it during a production workflow run as well.

iamstarkov · Aug 19 '24 09:08