Bad support for send/receive into encrypted datasets
System information
Ubuntu 20.04 | Kernel 5.4.0 | architecture x86_64 | ZFS version: 0.8.3-1ubuntu12.4 | SPL version: 0.8.3-1ubuntu12.4
Describe the problem you're observing
Can't zfs send | zfs receive into encrypted file system.
Background: The system had been running under Ubuntu 14.04 and 16.04 with ZFS on a LUKS-encrypted disk, so from ZFS's point of view the pool was unencrypted. After upgrading to 20.04 and a newer ZFS, I wanted to replace the disks with bigger ones (i.e. create a new zpool), so I created a snapshot of the base dataset (with several datasets beneath it) and created the new zpool on the new disks with encryption enabled (zpool create -O encryption=on).
zfs send -R pool1/hdat@20201004move | zfs receive pool2/hdat
worked smoothly, but dropped the encryption property. Although pool2 was created as encrypted, pool2/hdat was not; it was plaintext (a very dangerous trap, since I had to erase all disks and recreate pool2 from scratch).
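To catch this trap early, the effective encryption of the received dataset can be checked right after the receive (a minimal sketch, using the pool and dataset names from above; requires a live pool):

```shell
# Check whether the received dataset actually ended up encrypted.
# "encryption  off" here means the data landed on disk in plaintext,
# despite pool2 having encryption enabled at its root.
zfs get -r encryption,keystatus pool2/hdat
```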
zfs send -R pool1/hdat@20201004move | zfs receive -o encryption=on pool2/hdat
created the target dataset, but then aborted with
cannot receive incremental stream: encryption property 'encryption' cannot be set for incremental streams.
So how is it supposed to be done?
Describe how to reproduce the problem
see above
Include any warning/errors/backtraces from the system logs
cannot receive incremental stream: encryption property 'encryption' cannot be set for incremental streams.
zfs send -R pool1/hdat@20201004move | zfs recv -x encryption pool2/hdat would have worked.
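Putting the pieces together, a replication into the encrypted pool that inherits the target's encryption might look like this (a sketch based on the commands above; -x encryption excludes the source's encryption property from the stream so the received dataset inherits it from pool2):

```shell
# Replicate pool1/hdat and all children into the encrypted pool2,
# letting the destination inherit encryption from its parent
# instead of taking (and failing on) the property from the stream.
zfs send -R pool1/hdat@20201004move | zfs recv -x encryption pool2/hdat

# Verify that the result is actually encrypted.
zfs get -r encryption pool2/hdat
```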
Observing the same with OpenZFS 2.1.1:
Sender: Gentoo, kernel 5.15.6, zfs-2.1.1-r5-gentoo, zfs-kmod-2.1.1-r4-gentoo
Receiver: Ubuntu 20.04.3 with 917bb976110a5c1c00afd1dfed06daad8041b47e refs/tags/zfs-2.1.1 built from source (both tools and kernel module), kernel 5.4.0-91-generic
On the sender, there is an unencrypted Test dataset with only two snapshots, @first and @second (in that chronological order).
On the receiver, there was no Test_enc dataset prior to receiving.
Then, initial send, successful:
# zfs send store_pool/store_set/Test@first | pv | ssh -p 33333 root@localhost "zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///root/pass -o checksum=sha256 test_pool/test_set/Test_enc"
root@localhost's password:
3,07GiB 0:00:40 [77,9MiB/s] [ <=> ]
Second send, failed:
# zfs send -i store_pool/store_set/Test@first store_pool/store_set/Test@second | pv | ssh -p 33333 root@localhost "zfs recv test_pool/test_set/Test_enc"
root@localhost's password:
cannot receive incremental stream: destination test_pool/test_set/Test_enc has been modified
since most recent snapshot
3,05MiB 0:00:02 [1,53MiB/s] [ <=> ]
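The "has been modified since most recent snapshot" error usually means something touched the received dataset between the two receives (even just mounting it can update metadata such as atimes). Two common mitigations, sketched under the assumption that the dataset names above are used:

```shell
# Option 1: roll the destination back to the last received snapshot
# before sending the increment.
zfs rollback test_pool/test_set/Test_enc@first

# Option 2: mark the destination read-only so nothing can modify it
# between receives.
zfs set readonly=on test_pool/test_set/Test_enc
```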
update to previous comment:
If, after creating (or re-using) the encrypted Test_enc dataset, I try to receive into a to-be-created test_pool/test_set/Test_enc/Test child dataset, so that presumably no properties need to be changed with -o options, the same error persists on the second (incremental) send.
Initial send:
zfs send store_pool/store_set/Test@first | pv | ssh -p 33333 root@localhost "zfs recv test_pool/test_set/Test_enc/Test"
Failed send:
zfs send -i store_pool/store_set/Test@first store_pool/store_set/Test@second | pv | ssh -p 33333 root@localhost "zfs recv test_pool/test_set/Test_enc/Test"
Also, there seems to be a workaround here: using zfs recv -F solves my issue. The second send succeeds, and afterwards the files in the destination dataset match the @second source snapshot.
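For reference, the -F workaround applied to the incremental send above (dataset names from the earlier commands; -F forces a rollback of the destination to its most recent snapshot before receiving):

```shell
zfs send -i store_pool/store_set/Test@first store_pool/store_set/Test@second \
  | ssh -p 33333 root@localhost "zfs recv -F test_pool/test_set/Test_enc/Test"
```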
A fix for this issue would be highly appreciated!
Shall be fixed by #14253
Can this issue be closed now?
I haven't seen this bug again, but only because I haven't used this feature since.
Nevertheless, I've seen strange behavior when receiving a plaintext (unencrypted) stream into an encrypted ZFS filesystem after forgetting to load the key and mount it first. I would have expected an error message, but instead a new child dataset is created and the data seem to be written to disk unencrypted, which is a severe security breach in my eyes. However, I am not sure whether
- this is a related or a separate bug
- it has been fixed by the fix mentioned above.
Regards
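One way to check whether such an accidentally created child dataset is actually unencrypted (a hedged sketch; the dataset names are hypothetical placeholders):

```shell
# A dataset silently created in plaintext will report encryption=off
# and a different encryptionroot, even though its parent is encrypted
# and the parent's key is not loaded.
zfs get encryption,keystatus,encryptionroot pool/encrypted_parent/received_child
```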