
S3cmd sync fails with code 74

Open AlexKempshall opened this issue 4 years ago • 5 comments

My backup to Amazon S3 started failing Tuesday morning. This is the command

s3cmd sync --delete-removed --limit-rate=288K --multipart-chunk-size-mb=5 --no-progress LOCAL_DIR s3://BUCKET

I searched everywhere for an explanation or solution to this problem; all I could come up with was a very terse

An error occurred while doing I/O on some file.
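For context, exit code 74 corresponds to `EX_IOERR` in the BSD `sysexits.h` convention ("an error occurred while doing I/O on some file"), which matches the terse message above; s3cmd appears to reuse these codes, though I haven't checked every version. On Unix, Python exposes the same constant:

```python
import os

# os.EX_IOERR is Python's binding of the sysexits.h EX_IOERR constant (Unix only).
# Its value, 74, is the exit status the failing s3cmd sync reported.
print(os.EX_IOERR)  # 74
```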

After much playing with the --verbose and --debug flags, I discovered that if I fed the data piecemeal, directory by directory, it would work.

It only seems to happen if there are large numbers of files to be deleted during the Sync.

AlexKempshall avatar Aug 17 '21 11:08 AlexKempshall

Could you share the end of the debug log of your normal sync with the --debug flag? That should give us more clues about what could be going on.

fviard avatar Aug 17 '21 11:08 fviard

Once I've fed the data piecemeal it's OK until the next time. It's occurred three times this year; in the previous 4 or 5 years I never had a problem.

I'm hoping my data is now set up in such a way that I won't encounter the problem again. Famous last words!

Will have to set up a test bucket to recreate the problem. May take some time.

AlexKempshall avatar Aug 17 '21 16:08 AlexKempshall

It was quite easy to replicate: I added 4,000+ files to a directory, synchronized the directory to the bucket, deleted the files from the directory, then synchronized the directory to the bucket again.

Fails every time.

Attached is the tail end of the debug log: amazon-backup.log.1.txt

AlexKempshall avatar Aug 19 '21 15:08 AlexKempshall

This may be related

https://github.com/s3tools/s3cmd/issues/681

AlexKempshall avatar Aug 19 '21 15:08 AlexKempshall

I'm not quite sure I believe this, so I might have to repeat the tests. I also might have messed up the logs.

Anyway, in the log from a failed sync with the debug flag set, I got this message:

INFO: Summary: 2 local files to upload, 0 files to remote copy, 5907 remote files to delete

As the problem appeared to be with deletes, I supplied the flag "--max-delete=950", ran the sync, and got this warning in the log:

WARNING: delete: maximum requested number of deletes would be exceeded, none performed.

I then changed the flag to "--max-delete=6000".

The sync deleted all the objects and completed successfully!

I removed the --max-delete flag and ran the sync again. It completed successfully, confirming that all the deletes had been performed in the previous run.
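The behavior observed above matches what the warning text says: --max-delete is an all-or-nothing guard, not a throttle. A simplified model of that guard (hypothetical helper names, not s3cmd's actual code):

```python
def apply_max_delete(pending_deletes, max_delete=None):
    """Simplified model of s3cmd's --max-delete guard (hypothetical helper,
    not the real implementation): if the number of pending remote deletes
    exceeds the cap, skip all of them rather than deleting a partial subset."""
    if max_delete is not None and len(pending_deletes) > max_delete:
        print("WARNING: delete: maximum requested number of deletes "
              "would be exceeded, none performed.")
        return []
    return pending_deletes

# With the 5907 pending deletes reported in the log above:
keys = ["key-%d" % i for i in range(5907)]
print(len(apply_max_delete(keys, max_delete=950)))   # 0 -- the warning case
print(len(apply_max_delete(keys, max_delete=6000)))  # 5907 -- all deletes proceed
```

This explains why --max-delete=950 produced the warning and deleted nothing, while --max-delete=6000 let all 5907 deletes through.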

Hope this helps.

I'll look in the bucket to confirm that they have indeed been deleted.

AlexKempshall avatar Aug 20 '21 07:08 AlexKempshall

did you try with --recursive :)

bumarcell avatar Aug 25 '22 13:08 bumarcell

This is an old issue I raised in August 2021 - last year.

did you try with --recursive :)

Good question. No.

I believe sync does this automatically.

I've not been troubled with it since, so maybe the bandwidth couldn't cope with so many deletes.

AlexKempshall avatar Aug 26 '22 20:08 AlexKempshall