snap-sync
Cleanup service
You know how difficult it is to delete multiple btrfs snapshots spread across different sub(!)directories. A cleanup service similar to snapper's own would be nice. It could use settings similar to what snapper offers for local configs, though based on the number of backups rather than their age. Alternatively, you could provide a manual command that at least makes cleaning up the backups easier:
# List all external backups
snap-sync -c home list
# Clean external backups x-y
snap-sync -c home clean 172-185
# Another idea would be to keep the last x backups
snap-sync -c home keep 3
The last option could also be used as a systemd service (which should run after the backup). Depending on the setting, you would then have backups for the last 3 weeks, for example. Keeping one backup per month is not possible this way, but that might be way too complex to implement for now; that is up to you.
The (default) setting, or the number of backups to keep, should be selectable in the snapper config itself, so you can keep more copies of `home` than of `root`, etc. Or you could completely disable the cleanup for critical snapper configs.
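To make the idea concrete, here is a rough sketch of what such per-config settings could look like, reusing snapper's shell-style config format; the key names SNAP_SYNC_KEEP and SNAP_SYNC_CLEANUP are invented for this example and are not existing snap-sync or snapper options:

# /etc/snapper/configs/home (sketch only, hypothetical keys)
# keep the last three external backups of home
SNAP_SYNC_KEEP="3"

# /etc/snapper/configs/root (sketch only, hypothetical keys)
# never clean up external backups of a critical config
SNAP_SYNC_CLEANUP="no"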
We can solve this request via systemd units. Since wip-dash-v2, snap-sync can create a target snapper structure for all snap-sync jobs. All list entries for the target config (snap-$config) can then be administered with snapper's own logic. Thus we can assign a timer unit that starts a cleanup unit with a given timeline. The timeline is declared in a snap-sync template/config assigned to the target config.
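A minimal sketch of how that could look with plain systemd template units, assuming a target config named snap-home (following the snap-$config naming above) and relying on snapper's existing cleanup algorithms; the unit names and the choice of the timeline algorithm are assumptions, not something snap-sync ships today:

#!/bin/bash
# Sketch only: wire snapper's own cleanup to a timer for a snap-sync target config.
# Must be run as root, since it writes unit files to /etc/systemd/system.

cat > /etc/systemd/system/snap-sync-cleanup@.service <<'EOF'
[Unit]
Description=Clean up snap-sync backups for snapper config %i

[Service]
Type=oneshot
# snapper's cleanup honours the limits declared in the config itself
ExecStart=/usr/bin/snapper -c %i cleanup timeline
EOF

cat > /etc/systemd/system/snap-sync-cleanup@.timer <<'EOF'
[Unit]
Description=Daily snap-sync backup cleanup for snapper config %i

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload
systemctl enable --now snap-sync-cleanup@snap-home.timer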
It is not the solution, but for people who just want to keep the latest backup I made a bash script that works for me on Linux. It's not very well written (I'm kind of a newbie), so feel free to improve it.
- first it scans /etc/snapper/configs to fetch the list of configs
- then it finds the snapshots older than the last one and deletes them
- finally it deletes the upper-level directories
In order to work, the working directory ($PWD) must be the snap-sync destination folder, e.g. /mnt/backup.
To keep more than just the last snapshot, change `tail +2` to the desired number of snapshots to keep plus 1, on both find pipelines.
Hope it helps someone.
#!/bin/bash
# Loop over every snapper config name found in /etc/snapper/configs
find /etc/snapper/configs -printf "%f\n" | while read snapconfig
do
    # List the backup subvolumes for this config, skip the first entry
    # in the listing, and delete the remaining ones
    find "$PWD/$snapconfig" -mindepth 2 -maxdepth 2 -type d -exec ls -1trd "{}" \; | tail +2 | while read line
    do
        btrfs subvolume delete "$line"
        echo "$line"_deleted
    done
    # Do the same for the upper-level snapshot directories
    find "$PWD/$snapconfig" -mindepth 1 -maxdepth 1 -type d -exec ls -1trd "{}" \; | tail +2 | while read line
    do
        rm -r "$line"
        echo "$line"_deleted
    done
done
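For reference, assuming the script above is saved as cleanup-latest.sh (the file name is just an example), it has to be started from the backup mount point, because it builds its paths from $PWD:

# run from the snap-sync destination, e.g. /mnt/backup
cd /mnt/backup
sudo bash cleanup-latest.sh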
I second this. Cleaning up synced snapshots on the external disk, either manually or with a script similar to @FraYoshi's, seems to get snap-sync out of sync in some way that I don't understand. After deleting a few snapshots on my external drive, new runs of snap-sync complain about not being able to find the parent subvolume.
Edit: I think my issue was that I deleted the external snapshot that corresponded to the latest incremental backup on my local drive.
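One way to check which local snapshot snap-sync still needs as the parent for the next incremental send (and therefore which external copy must not be deleted) is to look at the snapper list for the backup markers; on my system the relevant snapshot is described as the latest incremental backup, though the exact wording may differ between snap-sync versions:

# show the snapshots snap-sync has marked as backups for the "home" config;
# the one flagged as the latest incremental backup must keep its external twin
sudo snapper -c home list | grep -i "incremental backup"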
I've written a script called clean-snap-sync-external.sh to clean snapshots created by snap-sync on external volumes. It is inspired by @FraYoshi's above, but fixes a logic error and checks for a few corner cases:
- Fix: the `find` with `-exec ls -1trd` doesn't work as the author intended, because it executes the `ls` on each file individually, so the sorting is not technically correct (though it appears correct); see the sketch after this list
- ~~Change: introduce a variable for the number of snapshots to delete, starting with the oldest (versus number of snapshots to keep, in the original)~~
- Change: make sure we don't remove a snapshot that is marked as the "latest incremental backup" in snapper
- Change: make sure that the total number of snapshots is greater than the requested number to delete
- Change: add more console output with INFO and DEBUG color coding (in the future I could enable debug messages with `-d` or `-v`)
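To illustrate the Fix bullet: with `-exec ... \;` the `ls` is started once per directory, so the `-t` sorting never compares the directories against each other. Letting find print the modification time and sorting the whole list once is one way to get a true global ordering (sketch only, not necessarily what clean-snap-sync-external.sh does):

# original pattern: one ls per match, so -tr cannot order the matches globally
find "$PWD/$snapconfig" -mindepth 2 -maxdepth 2 -type d -exec ls -1trd "{}" \;

# one possible fix: print mtime + path, sort once, drop the newest entry,
# and keep only the paths of the older backups
find "$PWD/$snapconfig" -mindepth 2 -maxdepth 2 -type d -printf '%T@ %p\n' \
    | sort -n | head -n -1 | cut -d' ' -f2-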
I've also recently made snap-sync-cleanup in Python for my own personal use; it does something similar. It's released on PyPI in case others want to use it.
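Assuming the PyPI package uses the same name as the project, installing it should be as simple as:

pip install snap-sync-cleanup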