s3cmd
delete local file after upload
I'm looking for an easy way to move files to an s3 bucket. With rsync this would be achieved with --remove-source-files. Please consider adding this to s3cmd.
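For reference, this is the rsync behaviour I mean (paths and host here are just an example):
$ rsync -av --remove-source-files /var/log/ backup:/logs/
# uploads everything under /var/log/ and deletes each local file once it has transferred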
+1 I could use this capability, too.
+1, looks like this has also been requested here
+1
This is particularly abrasive, as the put verb does not return > 0 on failure:
$ s3cmd put THIS_FILE_DOES_NOT_EXIST s3://MY_VALID_BUCKET/; echo $?
0
For the record, [put] now returns a more valid exit code in this case.
$ ./s3cmd --version
s3cmd version 1.5.0
$ ./s3cmd put THIS_FILE_DOES_NOT_EXIST s3://MY_VALID_BUCKET/; echo $?
ERROR: Parameter problem: Nothing to upload.
64
+1, this would be really helpful
+1
+1 Does anyone have a workaround for now?
+1
+1
Or just implement a move command for local files, like awscli does:
s3cmd mv /var/log/mylog.log s3://my-bucket/logs/
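For comparison, the awscli equivalent already works this way (bucket and path are just examples):
$ aws s3 mv /var/log/mylog.log s3://my-bucket/logs/
# uploads the file, then removes the local copy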
rclone.org also implements a 'move' command (see the example below). Alas, it doesn't really deal well with directories that contain more than a few files. s3cmd copes with those perfectly, but it doesn't move local files, and --delete-after only works with sync, which doesn't help me much.
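For reference, the rclone invocation looks like this (remote name is just an example):
$ rclone move /logs s3remote:my-bucket/logs
# uploads everything under /logs, deleting each local file once it has transferred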
@mdomsch, thank you for creating s3cmd! It has been a great help in deploying and testing our new setup! The new versions also work great with Ceph without hostname-based buckets.
+1
@BartVB s3cmd sync --delete-after does not delete local files after upload; it deletes destination files that are no longer found at the source. The difference between this option and the normal delete-removed is that files are deleted only once all files have been successfully uploaded, instead of at the beginning, before the upload.
(from https://github.com/s3tools/s3cmd/issues/958)
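To illustrate the documented behaviour (bucket name illustrative), --delete-after only changes when deletions happen on the destination, and only together with --delete-removed; it never removes local source files:
$ s3cmd sync --delete-removed --delete-after ./logs/ s3://my-bucket/logs/
# destination objects missing from ./logs/ are deleted AFTER all uploads finish;
# nothing under ./logs/ is ever removed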
+1 for this feature (surprised it doesn't exist) ... I'd like to sync logs from a jump host to S3, then clean the jump host up once I know they're backed up.
+1
+1
+1
+1
+1
+1
7 years and counting...
+1
# Workaround: upload each file under /logs, log the outcome, and delete the
# local copy only after a successful upload.
find /logs -type f | while read -r FILE; do
    if s3cmd put "$FILE" s3://bucketname; then
        echo "$FILE successfully uploaded at $(date +'%d-%m-%Y-%H-%M-%S')" >> "s3cmd-$(date +'%d-%m-%Y').log"
        rm -f "$FILE"
    else
        echo "ERROR: failed to upload $FILE at $(date +'%d-%m-%Y-%H-%M-%S')" >> "s3cmd-$(date +'%d-%m-%Y').log"
        exit 1  # note: only exits the pipeline subshell running the loop, not the caller
    fi
done
8 years +1
I'll fix this for a $200 bounty.
OK, to spare my inbox, I have added the requested feature. I need someone with an S3 bucket to test it, since I don't have one available, but I should be able to fix anything quickly.
Overview of changes:
Added a delete_after_put config option that is loaded from the config. If delete_after_put is True, then after the put the loop will call os.unlink (not sure if it works on a single upload -- needs a tester with S3).
Did it all in the new GitHub editor, so someone needs to make sure I didn't miss a tab. A sketch of the intended usage is below.
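If the patch works as intended, usage would look something like this (the option name is from my patch and entirely untested, so treat it as hypothetical):
# in ~/.s3cfg (hypothetical option added by this patch):
#   delete_after_put = True
$ s3cmd put /logs/mylog.log s3://my-bucket/logs/
# with the option enabled, the local file would be os.unlink'ed after a successful put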
I will take donations at [email protected]