
Data transfer of large files (10 TB) is too slow, with 10% data loss

Open sujata00 opened this issue 4 years ago • 1 comment

Hi,

I am trying to transfer a 10 TB file using s3cmd. It is taking more than 2 days to complete the transfer to the remote server.

Command used

nohup python3 cloud-s3.py --upload s3cmd /data/5TB.txt pr-bucket1 --multipart-chunk-size-mb 1024 --limit-rate 100M --no-check-md5 > debug.txt 2>&1 &

sujata00 avatar Aug 10 '21 09:08 sujata00

What is cloud-s3.py? There is "s3cmd" in the parameters of your command, but you are not using a standard s3cmd command.

But, FYI, if the parameters are those of s3cmd sync, you should not use "limit-rate" if you want your command to be faster. Also, the file size limit for AWS S3 is 5 TB, and you should not use such a large multipart chunk size (1 GB) over the internet, because there is a greater chance of occasional data corruption, in which case the whole chunk has to be retried and uploaded again in full.
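
For reference, a plain s3cmd upload of that file with a smaller chunk size could look something like this (the file path and bucket name are taken from your command; the 64 MB chunk size is only an illustrative value, not a recommendation for your specific link):

nohup s3cmd put /data/5TB.txt s3://pr-bucket1/ --multipart-chunk-size-mb 64 > debug.txt 2>&1 &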

fviard avatar Sep 26 '21 23:09 fviard

Closing, as no follow up. Don't hesitate to reopen if you have more details.

fviard avatar Oct 04 '22 20:10 fviard