
keep specific files longer than others

Catscrash opened this issue 7 years ago • 2 comments

Hi,

Currently I'm using b2 sync --keepDays 30 with incremental, encrypted tar backups, so the source contains:

2018.tar.gz.gpg
2018_02.tar.gz.gpg
2018_02_04.tar.gz.gpg

2018_02_04.tar.gz.gpg will be deleted tomorrow and replaced by _05; _05 will be uploaded and _04 will be kept for 30 days, which is fine.

But I would like to get rid of 2018.tar.gz.gpg and 2018_02.tar.gz.gpg locally, because I also have the unencrypted versions on disk. However, those should remain in the bucket for 90 days (_02.tar.gz), or even over a year (2018.tar.gz).

Is there a way to do that? Maybe by defining some kind of empty placeholder while somehow preventing an upload?

thanks!

best regards

Catscrash, Feb 04 '18 13:02

Okay, so I removed the file, created an empty 2018.tar.gz.gpg, and used touch -d with an old date so that it won't replace the file in the bucket when using --skipNewer. It's not exactly elegant, but it works. It's also hard to automate, because the script has no way of knowing whether the file transferred successfully. Maybe someone has a nicer solution.
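The placeholder trick above can be sketched roughly like this (the file name and year are from this thread; the backdate value and the bucket path in the commented-out sync command are hypothetical):

```shell
#!/bin/sh
# Replace the large local archive with an empty, backdated stub. The name
# stays present locally, and --skipNewer stops b2 sync from overwriting
# the real copy already in the bucket with the newer (empty) stub.
ARCHIVE="2018.tar.gz.gpg"

rm -f "$ARCHIVE"                   # drop the large local copy
: > "$ARCHIVE"                     # recreate it as a zero-byte placeholder
touch -d "2018-01-01" "$ARCHIVE"   # backdate it behind the remote version

# Hypothetical sync invocation, kept from the thread's flags:
# b2 sync --skipNewer --keepDays 30 . b2://my-bucket/backups
```

As the comment notes, this only works because --skipNewer compares modification times, so the stub must be older than the version already stored in the bucket.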

Catscrash avatar Feb 04 '18 13:02 Catscrash

If you have different prefixes/folders for your yearly and monthly backups, you could use Lifecycle Rules to delete those files. Other than that, I don't think an automatic solution is implemented yet. You could, of course, drop the --keepDays 30 option and write a separate script that deletes files when appropriate.
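As a sketch of that suggestion, assuming the backups were regrouped under hypothetical monthly/ and yearly/ prefixes, per-prefix retention could look like the following Lifecycle Rules fragment (applied to the bucket, e.g. via b2 update-bucket with its --lifecycleRules option; the prefix names and day counts are illustrative only):

```
[
  {
    "fileNamePrefix": "monthly/",
    "daysFromUploadingToHiding": null,
    "daysFromHidingToDeleting": 90
  },
  {
    "fileNamePrefix": "yearly/",
    "daysFromUploadingToHiding": null,
    "daysFromHidingToDeleting": 365
  }
]
```

With rules like these, files hidden by a sync (i.e. deleted locally) would be purged 90 days later under monthly/ and a year later under yearly/, which matches the retention the question asks for.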

It's also hard to automate, because the script has no way of knowing whether the file transferred successfully.

I don't understand what you mean. The b2 CLI exits with a non-zero status if the command failed, so a script can check for that.
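For illustration, a wrapper script can branch on that exit status. In this sketch, run_backup is a hypothetical wrapper whose echo stub stands in for the real b2 sync invocation (shown commented out, with a made-up bucket path) so the example is runnable as-is:

```shell
#!/bin/sh
# Branch on the backup command's exit status: non-zero means failure.
run_backup() {
    # Real invocation would go here, e.g.:
    # b2 sync --skipNewer --keepDays 30 /backups b2://my-bucket/backups
    echo "stub: pretend sync ran"   # stand-in so the sketch executes
}

if run_backup; then
    echo "sync succeeded"
else
    echo "sync failed (exit $?)" >&2
    exit 1
fi
```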

svonohr, Feb 04 '18 15:02