sanoid
syncoid: Better handle replication when the `--no-rollback` option is specified and bookmarks exist on the source
The problem:
Backups are made infrequently with the --no-rollback option. When the latest replicated snapshot is a daily snapshot, it will probably no longer exist on the source by the time the next replication runs. Syncoid then finds a common snapshot, but can't use it as a base for replication, because it is not the latest snapshot on the target and syncoid is not allowed to roll the target back.
(The error is: cannot receive incremental stream: destination … has been modified since most recent snapshot)
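The mismatch can be modeled without ZFS at all. The sketch below (plain bash, illustrative snapshot names only, not syncoid code) finds the newest common snapshot and checks whether an incremental receive from it could succeed without a rollback:

```bash
#!/usr/bin/env bash
# Model of the matching logic: find the newest common snapshot and decide
# whether an incremental receive without rollback can succeed.
# Snapshots are listed oldest -> newest, as from `zfs list -s creation`.

src_snaps=(s0 s2 s3)   # the daily snapshot s1 was already pruned on the source
tgt_snaps=(s0 s1)      # the target still ends at the replicated s1

newest_common=""
for s in "${src_snaps[@]}"; do
  for t in "${tgt_snaps[@]}"; do
    [ "$s" = "$t" ] && newest_common=$s
  done
done

newest_on_target=${tgt_snaps[${#tgt_snaps[@]}-1]}

echo "newest common snapshot: $newest_common"
if [ "$newest_common" = "$newest_on_target" ]; then
  echo "incremental send from @$newest_common is safe"
else
  # This is the reported situation: zfs receive refuses the stream with
  # "destination ... has been modified since most recent snapshot".
  echo "send from @$newest_common would require rolling back @$newest_on_target"
fi
```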
I added the --create-bookmark option, so syncoid creates a bookmark for the latest replicated snapshot. But syncoid only checks for bookmarks if it can't find any common snapshot between source and target, which is not the case here.
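One possible fallback, sketched here as plain bash rather than syncoid's actual Perl internals (all names are illustrative): when the newest common snapshot is not the newest snapshot on the target, also look for a source bookmark matching the target's newest snapshot.

```bash
#!/usr/bin/env bash
# Sketch of the suggested fallback: prefer a bookmark matching the
# target's newest snapshot over an older common snapshot.

common_snap="s0"       # newest snapshot present on both sides
tgt_newest="s1"        # newest snapshot on the target
src_bookmarks=(s1)     # bookmarks on the source, e.g. pool/a#s1

base=""
if [ "$common_snap" = "$tgt_newest" ]; then
  base="@$common_snap"                       # normal incremental send
else
  for b in "${src_bookmarks[@]}"; do
    [ "$b" = "$tgt_newest" ] && base="#$b"   # bookmark-based send
  done
fi
echo "replication base: $base"
```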
What is possible with the --no-rollback option:
A send -i from the bookmark on the source that corresponds to the latest snapshot on the target.
The snapshot to send can be either the latest snapshot on the source (so no intermediate snapshots are transferred) or the oldest snapshot that is newer than the bookmark (in which case syncoid could afterwards continue with a normal send -I from that snapshot, which would also preserve the intermediate snapshots).
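The proposed two-step sequence could be assembled like this. The sketch only prints the send/receive pipelines; dataset and snapshot names follow the repro script, and the helper logic is hypothetical, not syncoid's actual implementation:

```bash
#!/usr/bin/env bash
# Build the two-step replication:
# 1. 'zfs send -i' from the bookmark to the oldest snapshot newer than it
# 2. 'zfs send -I' from that snapshot to the newest, keeping intermediates

dataset="pool/a"
target="pool/b"
bookmark="s1"          # source bookmark for the pruned snapshot
newer_snaps=(s2 s3)    # source snapshots newer than the bookmark, oldest first

first=${newer_snaps[0]}
last=${newer_snaps[${#newer_snaps[@]}-1]}

step1="zfs send -i ${dataset}#${bookmark} ${dataset}@${first} | zfs receive -s ${target}"
echo "$step1"
if [ "$first" != "$last" ]; then
  step2="zfs send -I ${dataset}@${first} ${dataset}@${last} | zfs receive -s ${target}"
  echo "$step2"
fi
```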
Alternative solutions:
- Keeping more hourly snapshots
- With #153: Don't send hourly snapshots
Below is a bash script for reproducing the problem. It's probably a bit more verbose than necessary.
```bash
#!/bin/bash
# Cleanup before:
# zpool destroy pool
# rm -r /tmp/zfstest

if [ -d "/tmp/zfstest" ]; then
    echo "Error! /tmp/zfstest directory already exists!"
    exit 1
fi
mkdir /tmp/zfstest

# Latest master version, independent of any installed version
curl -s https://raw.githubusercontent.com/jimsalterjrs/sanoid/master/syncoid -o /tmp/zfstest/syncoid
chmod 755 /tmp/zfstest/syncoid

truncate -s 512M /tmp/zfstest/pool
zpool create pool /tmp/zfstest/pool
zfs create pool/a
zfs snapshot pool/a@s0
zfs list -t filesystem,snapshot,bookmark
echo
echo

/tmp/zfstest/syncoid --no-privilege-elevation --no-sync-snap --no-rollback --create-bookmark pool/a pool/b
echo
zfs list -t filesystem,snapshot,bookmark
echo
echo

echo "Test 1" > /pool/a/file1
zfs snapshot pool/a@s1
zfs list -t filesystem,snapshot,bookmark
echo
echo

/tmp/zfstest/syncoid --no-privilege-elevation --no-sync-snap --no-rollback --create-bookmark pool/a pool/b
echo
zfs list -t filesystem,snapshot,bookmark
echo
echo

echo "Test 2" > /pool/a/file2
zfs snapshot pool/a@s2
zfs list -t filesystem,snapshot,bookmark
echo
echo

zfs destroy pool/a@s1
zfs list -t filesystem,snapshot,bookmark
echo
echo

# This doesn't work. It tries to use a@s0 as the base snapshot
/tmp/zfstest/syncoid --no-privilege-elevation --no-sync-snap --no-rollback --create-bookmark pool/a pool/b
echo
zfs list -t filesystem,snapshot,bookmark

echo "Test 3" > /pool/a/file3
zfs snapshot pool/a@s3
zfs list -t filesystem,snapshot,bookmark
echo
echo

# This doesn't work. It tries to use a@s0 as the base snapshot
/tmp/zfstest/syncoid --no-privilege-elevation --no-sync-snap --no-rollback --create-bookmark pool/a pool/b
echo
zfs list -t filesystem,snapshot,bookmark

# Syncoid tries to do this, which doesn't work:
# zfs send -I pool/a@s0 pool/a@s3 | mbuffer -q -s 128k -m 16M 2>/dev/null | pv -p -t -e -r -b -s 9328 | zfs receive -s pool/b
# This should work (but doesn't send intermediate snapshots):
# zfs send -i pool/a#s1 pool/a@s3 | mbuffer -q -s 128k -m 16M 2>/dev/null | pv -p -t -e -r -b -s 9328 | zfs receive -s pool/b
# It could probably be split in two:
# 1. Replicate with '-i' from the bookmark to the oldest source snapshot that doesn't exist on the target
# zfs send -i pool/a#s1 pool/a@s2 | mbuffer -q -s 128k -m 16M 2>/dev/null | pv -p -t -e -r -b -s 9328 | zfs receive -s pool/b
# 2. Replicate with '-I' from this snapshot to the latest snapshot (a normal replication)
# zfs send -I pool/a@s2 pool/a@s3 | mbuffer -q -s 128k -m 16M 2>/dev/null | pv -p -t -e -r -b -s 9328 | zfs receive -s pool/b
```
bump
I just ran into the need for this too. Is there something someone can do to help move it along?