s3cmd
Data transfer of large files (10 TB) is too slow, with 10% data loss
Hi,
I am trying to transfer a 10 TB file using s3cmd. It is taking more than 2 days to complete the transfer to the remote server.
Command used
nohup python3 cloud-s3.py --upload s3cmd /data/5TB.txt pr-bucket1 --multipart-chunk-size-mb 1024 --limit-rate 100M --no-check-md5 > debug.txt 2>&1 &
What is cloud-s3.py? There is "s3cmd" in the parameters of your command, but you are not using a standard s3cmd command.
But FYI, if the parameters are the ones of s3cmd sync, you should not use "limit-rate" if you want your command to be faster. Also, the file size limit for AWS S3 is 5 TB, and you should not use such a big multipart chunk size (1 GB) over the internet, because there is a greater chance of occasional data corruption, and each corrupted chunk has to be retried in full.
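For reference, a minimal sketch of a plain s3cmd upload with a smaller chunk size and no rate limit (assuming you can call s3cmd directly and that the file path and bucket name from your command are correct; adjust them to your setup):

s3cmd put /data/5TB.txt s3://pr-bucket1/5TB.txt --multipart-chunk-size-mb=128

With a 128 MB chunk, a retried part costs far less than retrying a 1 GB part, and dropping --limit-rate lets the transfer use the full available bandwidth.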
Closing, as there was no follow-up. Don't hesitate to reopen if you have more details.