Block Cloning
Motivation and Context
Block Cloning allows cloning a file (or a subset of its blocks) into another (or the same) file by just creating additional references to the data blocks, without copying the data itself. Block Cloning can be described as a fast, manual deduplication.
Description
In many ways Block Cloning is similar to the existing deduplication, but there are some important differences:
- Deduplication is automatic and Block Cloning is not - one has to use dedicated system call(s) to clone a given file or set of blocks.
- Deduplication keeps all data blocks in its table, even those referenced just once. Block Cloning creates an entry in its tables only when there are at least two references to a given data block. If the block was never explicitly cloned, or the second-to-last reference was dropped, there is neither space nor performance overhead.
- Deduplication needs data to work - one needs to pass real data to the write(2) syscall so a hash can be calculated. Block Cloning doesn't require data, just block pointers to the data, so it is extremely fast, as we pay neither the cost of reading the data nor the cost of writing the data - we operate exclusively on metadata.
- If the D (dedup) bit is not set in the block pointer, it means that the block is not in the dedup table (DDT) and we don't consult the DDT when we need to free the block. Block Cloning must be consulted on every free, because we cannot modify the source BP (e.g. by setting something similar to the D bit), so we have no hint whether the block is in the Block Reference Table (BRT) and we need to look into the BRT. There is an optimization in place that eliminates the majority of BRT lookups; it is described below in the "Minimizing free penalty" section.
- The BRT entry is much smaller than the DDT entry - for the BRT we only store a 64-bit offset and a 64-bit reference counter.
- Dedup keys are cryptographic hashes, so two blocks that are close to each other on disk are most likely in totally different parts of the DDT. The BRT entry keys are offsets into a single top-level VDEV, so data blocks from one file should have BRT entries close to each other.
- Scrub will only do a single pass over a block that is referenced multiple times in the DDT. Unfortunately this is not currently (if at all) possible with Block Cloning, so a block referenced multiple times will be scrubbed multiple times.
- Deduplication requires a cryptographically strong hash as a checksum, or additional data verification. Block Cloning works with any checksum algorithm, or even with checksumming disabled.
As mentioned above, BRT entries are much smaller than DDT entries. To uniquely identify a block we just need its vdevid and offset. We also need to maintain a reference counter. The vdevid will often repeat, as there is a small number of top-level VDEVs and a large number of blocks stored in each VDEV. We take advantage of that to reduce the BRT entry size further by maintaining one BRT for each top-level VDEV, so each BRT entry only needs the offset and the counter.
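To make that concrete, here is a minimal sketch of what a per-vdev BRT entry could look like; the field and type names are illustrative only, not the actual OpenZFS definitions:

```c
#include <stdint.h>

/*
 * Illustrative sketch only -- not the actual OpenZFS structures.
 * Because one BRT is kept per top-level VDEV, the vdev id does not
 * need to be stored in every entry; an entry is just offset + refcount,
 * i.e. 16 bytes, much smaller than a DDT entry.
 */
typedef struct brt_entry_sketch {
	uint64_t	bre_offset;	/* block offset within this vdev */
	uint64_t	bre_refcount;	/* number of extra references */
} brt_entry_sketch_t;

typedef struct brt_vdev_sketch {
	uint64_t	brtv_vdevid;	/* top-level vdev this table covers */
	uint64_t	brtv_nentries;	/* entries, keyed by offset */
} brt_vdev_sketch_t;
```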
Minimizing free penalty.
Block Cloning allows cloning any existing block. When we free a block there is no hint in the block pointer whether the block was cloned or not, so on each free we have to check whether there is a corresponding entry in the BRT. If there is, we need to decrease the reference counter. Doing a BRT lookup on every free can potentially be expensive by requiring additional I/Os if the BRT doesn't fit into memory. This is the main problem with deduplication, so we've learned our lesson and try not to repeat the same mistake here. How do we do that? We divide each top-level VDEV into 1GB regions. For each region we maintain a reference counter that is the sum of all reference counters of the cloned blocks that have offsets within the region. This creates an array of 64-bit numbers (the regions array) for each top-level VDEV. The regions array is always kept in memory and updated on disk in the same transaction group as the BRT updates, to keep everything in sync. We can keep the array in memory, because it is very small. With 1GB regions and a 1TB VDEV the array requires only 8kB of memory (we may decide to decrease the region size in the future). Now, when we want to free a block, we first consult the array. If the counter for the whole region is zero, there is no need to look for the BRT entry, as there certainly isn't one. If the counter for the region is greater than zero, only then will we do a BRT lookup, and if an entry is found we will decrease the reference counters in the entry and in the regions array.
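A rough sketch of that fast path on free, assuming 1GB regions and one in-memory 64-bit counter per region (names are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

#define	BRT_REGION_SHIFT	30	/* 1GB regions, as described above */

/*
 * Hypothetical sketch of the "minimizing free penalty" check.
 * With 1GB regions, a 1TB vdev needs 1024 counters * 8 bytes = 8kB.
 */
static bool
brt_region_maybe_cloned(const uint64_t *region_refcnt, uint64_t offset)
{
	uint64_t region = offset >> BRT_REGION_SHIFT;

	/* Counter for the whole region is zero: certainly no BRT entry. */
	if (region_refcnt[region] == 0)
		return (false);

	/* Otherwise a real (possibly on-disk) BRT lookup is required. */
	return (true);
}
```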
The regions array is small, but can potentially be larger for very large VDEVs or smaller regions. In this case we don't want to rewrite the entire array on every change. We therefore divide the regions array into 128kB chunks and keep a bitmap of dirty chunks within a transaction group. When we sync the transaction group we can update only the parts of the regions array that were modified. Note: keeping track of the dirty parts of the regions array is implemented, but updating only parts of the regions array on disk is not yet implemented - for now we update the entire regions array if there was any change.
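A sketch of that dirty-chunk tracking (128kB chunks of the regions array, one dirty bit per chunk); names and layout are assumptions, not the actual code:

```c
#include <stdint.h>

#define	BRT_CHUNK_BYTES		(128 * 1024)	/* 128kB chunks */
#define	BRT_REGIONS_PER_CHUNK	(BRT_CHUNK_BYTES / sizeof (uint64_t))

/*
 * Hypothetical sketch: mark the chunk containing a modified region
 * counter as dirty, so only dirty chunks would need to be written at
 * txg sync time.  (As noted above, partial updates are not implemented
 * yet; today the whole array is rewritten if anything changed.)
 */
static void
brt_mark_region_dirty(uint8_t *dirty_bitmap, uint64_t region)
{
	uint64_t chunk = region / BRT_REGIONS_PER_CHUNK;

	dirty_bitmap[chunk / 8] |= (uint8_t)(1 << (chunk % 8));
}
```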
The implementation tries to be economic: if the BRT is not used, or no longer used, there will be no entries in the MOS and no additional memory used (e.g. the regions array is only allocated if needed).
Interaction between Deduplication and Block Cloning.
If both functionalities are in use, we could end up with a block that is referenced multiple times in both the DDT and the BRT. When we free one of the references we couldn't tell where it belongs, so we would have to decide which table takes precedence: do we first clear DDT references or BRT references? To avoid this dilemma the BRT cooperates with the DDT - if a given block is being cloned using the BRT and the BP has the D (dedup) bit set, the BRT will look up the DDT entry and increase the counter there. No BRT entry will be created for a block that resides on a dataset with deduplication turned on. The BRT may be more efficient for manual deduplication, but if the block is already in the DDT, then creating an additional BRT entry would be less efficient. This clever idea was proposed by Allan Jude.
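In pseudocode, the cooperation looks roughly like this (a sketch assuming ZFS kernel headers; the helper names are hypothetical, only the D-bit check mirrors the text above):

```c
/*
 * Sketch only (assumes ZFS kernel headers for spa_t/blkptr_t and the
 * BP_GET_DEDUP() macro); ddt_addref_sketch()/brt_addref_sketch() are
 * hypothetical helpers standing in for the real DDT/BRT code.
 */
static void
clone_addref_sketch(spa_t *spa, const blkptr_t *bp)
{
	if (BP_GET_DEDUP(bp)) {
		/* Block already lives in the DDT: bump its refcount there. */
		ddt_addref_sketch(spa, bp);
	} else {
		/* Otherwise account for the extra reference in the BRT. */
		brt_addref_sketch(spa, bp);
	}
}
```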
Block Cloning across datasets.
Block Cloning is not limited to cloning blocks within the same dataset. It is possible (and very useful) to clone blocks between different datasets. One use case is recovering files from snapshots: by cloning the files into the dataset we need no additional storage. Without Block Cloning we would need additional space for those files. Another interesting use case is moving files between datasets (copying the file content to the new dataset and removing the source file). In that case Block Cloning will only be used briefly, because the BRT entries will be removed when the source is removed. Note: currently it is not possible to clone blocks between encrypted datasets, even if those datasets use the same encryption key (this includes snapshots of encrypted datasets). Cloning blocks between datasets that use the same keys should be possible and should be implemented in the future.
Block Cloning flow through ZFS layers.
Note: Block Cloning can be used both for cloning file system blocks and ZVOL blocks. As of this writing no interface is implemented that allows for ZVOL block cloning. Depending on the operating system there might be different interfaces to clone blocks. On FreeBSD we have two syscalls:
```c
ssize_t fclonefile(int srcfd, int dstfd);
ssize_t fclonerange(int srcfd, off_t srcoffset, size_t length, int dstfd, off_t dstoffset);
```
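As a usage sketch of the proposed FreeBSD interface (assuming the fclonefile() prototype above is available on the system; error handling trimmed):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Prototype from the proposal above; assumed to be provided by the OS. */
ssize_t fclonefile(int srcfd, int dstfd);

int
clone_whole_file(const char *src, const char *dst)
{
	int srcfd = open(src, O_RDONLY);
	int dstfd = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (srcfd < 0 || dstfd < 0)
		return (-1);

	/* References the source blocks instead of copying the data. */
	if (fclonefile(srcfd, dstfd) < 0) {
		perror("fclonefile");
		return (-1);
	}
	close(srcfd);
	close(dstfd);
	return (0);
}
```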
Even though fclonerange() takes byte offsets and a length, they have to be block-aligned. Both syscalls call the OS-independent zfs_clone_range() function. This function was implemented based on zfs_write(), but instead of writing the given data we first read block pointers from the source file using the new dmu_read_l0_bps() function. Once we have BPs from the source file we call the dmu_brt_addref() function on the destination file. This function allocates BPs for us. We iterate over all source BPs. If a given BP is a hole or an embedded block, we just copy the BP. If it points to real data, we place this BP on a BRT pending list using the brt_pending_add() function.
We use this pending list to keep track of all BPs that got new references within this transaction group.
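The per-block loop in zfs_clone_range() then looks roughly like this (a sketch assuming ZFS kernel headers; brt_pending_add() is named above but its exact signature is assumed):

```c
/*
 * Sketch of the per-block handling in zfs_clone_range() (assumes ZFS
 * kernel headers; the brt_pending_add() signature is assumed).
 */
static void
clone_bps_sketch(spa_t *spa, blkptr_t *bps, int nbps, dmu_tx_t *tx)
{
	for (int i = 0; i < nbps; i++) {
		blkptr_t *bp = &bps[i];

		/* Holes and embedded blocks are simply copied as-is. */
		if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp))
			continue;

		/* Real data: record the new reference for this txg. */
		brt_pending_add(spa, bp, tx);
	}
}
```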
Some special cases to consider and how we address them:
- The block we want to clone may have been created within the same transaction group in which we are trying to clone it. Such a block has no BP allocated yet, so it is too early to clone it. In this case the dmu_read_l0_bps() function will return EAGAIN, and in the zfs_clone_range() function we will wait for the transaction group to be synced to disk and retry (see the sketch after this list).
- The block we want to clone may have been modified within the same transaction group. We could potentially clone the previous version of the data, but that doesn't seem right. We handle it the same way as the previous case.
- A block may be cloned multiple times during one transaction group (that's why the pending list is actually a tree and not an append-only list - this way we can figure out faster whether this block is being cloned for the first time in this txg or a subsequent time).
- A block may be cloned and freed within the same transaction group (see dbuf_undirty()).
- A block may be cloned and within the same transaction group the clone can be cloned again (see dmu_read_l0_bps()).
- A file might have been deleted, but the caller still has a file descriptor open to this file and clones it.
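For the EAGAIN case in the first item, the wait-and-retry could look like this (a sketch; the dmu_read_l0_bps() signature is assumed from the description above, not copied from the actual code):

```c
/*
 * Sketch of the EAGAIN handling in zfs_clone_range(): if the source
 * block was created or modified in the still-open txg it has no BP yet,
 * so wait for that txg to sync and retry.  The dmu_read_l0_bps()
 * signature is assumed.
 */
static int
read_src_bps_sketch(objset_t *os, uint64_t object, uint64_t offset,
    uint64_t length, blkptr_t *bps, size_t *nbps)
{
	int error;

	for (;;) {
		error = dmu_read_l0_bps(os, object, offset, length, bps, nbps);
		if (error != EAGAIN)
			break;
		/* Block born/modified in the open txg: wait and retry. */
		txg_wait_synced(dmu_objset_pool(os), 0);
	}
	return (error);
}
```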
When we free a block we have an additional step in the ZIO pipeline where we call the zio_brt_free() function. We then call brt_entry_decref(), which loads the corresponding BRT entry (if one exists) and decreases its reference counter. If this is not the last reference, we stop the ZIO pipeline here. If this is the last reference, or the block is not in the BRT, we continue the pipeline and free the block as usual.
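A sketch of that free-path decision (the decref return convention and helper names here are assumptions made for illustration, not the actual interfaces):

```c
/*
 * Sketch of the extra free step described above.  Assumes ZFS kernel
 * headers; brt_entry_decref_sketch() is a stand-in that returns true
 * when references remain after the decrement, and free_block_sketch()
 * stands in for the rest of the free pipeline.
 */
static void
zio_brt_free_sketch(spa_t *spa, const blkptr_t *bp)
{
	if (brt_entry_decref_sketch(spa, bp)) {
		/* Not the last reference: stop the ZIO pipeline here. */
		return;
	}
	/* Last reference, or the block was never cloned: free as usual. */
	free_block_sketch(spa, bp);
}
```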
At the beginning of spa_sync(), where there can be no more block cloning but before frees are issued, we call brt_pending_apply(). This function applies all the new clones to the BRT table - we load BRT entries and update reference counters. To sync new BRT entries to disk, we use the brt_sync() function. This function syncs all dirty top-level-vdev BRTs, regions arrays, etc.
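The ordering within spa_sync() described above, as a sketch (brt_pending_apply() and brt_sync() are named in the text; their exact signatures are assumed):

```c
/*
 * Sketch of the BRT-related ordering in spa_sync(): pending clones are
 * applied before any frees are issued, and dirty BRTs are written out
 * by brt_sync().  Signatures are assumed, not copied from the code.
 */
static void
spa_sync_brt_sketch(spa_t *spa, uint64_t txg)
{
	/* Turn this txg's pending-clone list into real BRT refcounts. */
	brt_pending_apply(spa, txg);

	/* ... frees for this txg are issued here, consulting the BRT ... */

	/* Write out dirty per-vdev BRTs, regions arrays, etc. */
	brt_sync(spa, txg);
}
```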
Block Cloning and ZIL.
Every clone operation is divided into chunks (similar to a write) and each chunk is cloned in a separate transaction. To keep ZIL entries small, each chunk clones at most 254 blocks, which keeps the ZIL entry at about 32kB (254 block pointers of 128 bytes each). Replaying a clone operation is different from a regular clone operation: when we log a clone operation we cannot use the source object, as it may reside on a different dataset, so we log the BPs we want to clone.
How Has This Been Tested?
I have a test program that can make use of this functionality that I have been using for manual testing.
Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Performance enhancement (non-breaking change which improves efficiency)
- [ ] Code cleanup (non-breaking change which makes code smaller or more readable)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
- [ ] Documentation (a change to man pages or other documentation)
Checklist:
- [x] My code follows the OpenZFS code style requirements.
- [ ] I have updated the documentation accordingly.
- [x] I have read the contributing document.
- [ ] I have added tests to cover my changes.
- [ ] I have run the ZFS Test Suite with this change applied.
- [ ] All commit messages are properly formatted and contain Signed-off-by.
It's finally happening, Thanks for the hard work
First, thanks for this, it's been something I've looked forward to since I heard about it on the drawing board.
A couple of thoughts, before I go play with this on my testbeds and have more informed ones:
- I think BRT winds up being strictly better than dedup for almost every use case (excepting if people don't have the time/initial additional space to burn to examine their data to be BRT'd after the fact). Consequently, one thing I've thought would be nice since I heard about the proposal was if we could eventually implement transparent "migrate to BRT" from a deduplicated pool - but I believe deferring to the DDT if the DDT bit is set would break the ability to do that later without another feature flag change later. Does that seem like a worthwhile idea/reason to consider ways to allow it to work without breaking changes? (I also just really would like to see existing dedup use dwindle to zero after BRT lands, just so my biases are clear.)
- In a sharply contrasting but complementary thought, another unfortunate loss if one uses BRT is the fact that you pay full size for your data on send/recv, until you perhaps go do an additional pass on it afterward...but that doesn't help your snapshots, which may be a substantial penalty. So another thing I've been curious about your thoughts on the feasibility of would be implementing additional hinting for zfs send streams that `zfs send` on its own would not generate, for "this received record should be a clone of this block", with the intent that one could implement, say, a `zstream` pipeline for being handed a periodically-updated-by-userland dedup table, and checking incoming blocks against it.
Is this a userland dedup implementation, similar to the one that just got ejected from the codebase? Yes. Could you mangle your bytes if it incorrectly claimed a block was a duplicate? Also yes. Would allowing people to pay the computational cost to optionally BRT their incoming send streams rather than live with the overhead forever be worth it, IMO? A third yes.
(Not suggesting you write it, for this or in general, just curious about your thoughts on the feasibility of such a thing, since "all your savings go up in smoke on send/recv" seems like one of the few pain points with no mitigations at the moment...though I suppose you could turn on dedup, receive, and then use idea 1 if that's feasible. Heh.)
- You said currently scrub will iterate over BRT'd blocks multiple times - IANA expert on either BRT or the scrub machinery, but it seems like you could get away with a BRT-lite table where any time a block would get scrubbed, you check the BRT-lite table, and if it's not there, you check the BRT, and if it has an entry, you queue it and add an entry to the aforementioned lite table? (You could even just drop it all on the floor if the machine restarts, rather than persisting it periodically, and accept scrubbing more than once if the machine restarts.)
Come to think of it, I suppose if that works, you could just do that for dedup entries too and stop having to scan the DDT for them, if you wanted...which might make someone I know happy. (You might wind up breaking backward compatibility of resuming scrubs, though, I suppose. Blech.)
Likely missing some complexity that makes this infeasible, just curious why you think this might not be a thing one could implement.
Thanks again for all your work on this, and for reading my uninformed questions. :)
@rincebrain Here are my thoughts on easier manual deduplication and preserving deduplication during send/recv.
Let's say you would like to manually deduplicate the data on your pool periodically. To do that you would need to scan your entire pool, read all the data and build some database that you can check against to see whether there is another copy of a given block already.
Once you have such a database, you could use zfs diff to get only the files that have changed and scan only them. Even better if we had a mechanism to read all BPs of a given file without reading the data - that would allow us to determine if a given block is newer than the snapshot and only read the data of the blocks that are newer (and not all the data within a modified file).
One could teach zfs recv to talk to this database and deduplicate the blocks on the fly.
Note that this database is pretty much DDT in userland.
Converting DDT to BRT is still doable. I think we would just need to teach ZFS that even though the BP has the D bit set, it may not be in the DDT.
As for the scrub, of course everything is doable :) Before you start the scrub you could allocate a bitmap where one bit represents 4kB of storage (2^ashift). Once you scrub a block you set the corresponding bit, so the next time you want to scrub the same block, you will know it has already been done. Such a bitmap would require 32MB of RAM per 1TB of pool storage. With this in place we could get rid of the current mechanism for the DDT. This bitmap could be stored in the pool to allow the scrub to continue after a restart (it could be synced to disk rarely, as the worst that can happen is that we scrub some blocks twice).
@pjd Wow, I just noticed this (you work in stealth mode? :) ), surprised at the progress, thanks! I had a question related to what @rincebrain asked about zfs send/receive and was curious if you knew how BTRFS was doing their reflink send preservation?
(I think the -c and -p options? But I've also read on email lists that supposedly btrfs can preserve reflinks over send/receive.) https://blogs.oracle.com/linux/post/btrfs-sendreceive-helps-to-move-and-backup-your-data
So was wondering how they do it. thx
How can we donate anonymously? (to a specific dev, or openzfs earmarked for specific developer?) Also need to know if "donations" will be tax-exempt for recipient or not. If they are not, sender should deduct as biz expense. (Personally, I want my unsolicited donations to be accepted as "Tax-Exempt gifts" by recipients, but double taxation of the same money needs to be avoided, so donor and gift recipient don't both pay taxes on donated funds, that's reason for question.)
> @pjd Wow, I just noticed this (you work in stealth mode? :) ), surprised at the progress, thanks! I had a question related to what @rincebrain asked about zfs send/receive and was curious if you knew how BTRFS was doing their reflink send preservation?
>
> (I think the -c and -p options? But I've also read on email lists that supposedly btrfs can preserve reflinks over send/receive.) https://blogs.oracle.com/linux/post/btrfs-sendreceive-helps-to-move-and-backup-your-data
>
> So was wondering how they do it. thx
Sorry, but I don't know how they do it.
> How can we donate anonymously? (to a specific dev, or openzfs earmarked for specific developer?) Also need to know if "donations" will be tax-exempt for recipient or not. If they are not, sender should deduct as biz expense. (Personally, I want my unsolicited donations to be accepted as "Tax-Exempt gifts" by recipients, but double taxation of the same money needs to be avoided, so donor and gift recipient don't both pay taxes on donated funds, that's reason for question.)
I'd recommend donating to the FreeBSD Foundation, which is fully tax-deductible in US and FF is supporting OpenZFS development.
@pjd I wish FreeBSD all the best, but I wanted to donate to a 'specific person', I only use Linux anyway. I would not mind 'paying taxes' on the donation, without any tax deduction, so that the individual recipient does not, since it is a gift. Also, via the foundation it seems you must give the foundation your full name/address etc; the anonymity they offer is just not listing your name on their website.
I'll try do a bit of research to see if I can find a more direct way than FreeBSD Foundation, or if that's the only way.
As far as btrfs reflinks over send/receive, I think they 'replay' a sort of playbook of actions on extents that play back on receive. (I could be wrong since the details escape me now, I'll have to take a peek again.)
Many thanks again for all your hard work and for surprising us with this pull request for a feature many of us have long wished for!
Excellent to see this feature being worked on! I just posted a request for "fast-copying" between datasets (#13516) because I had assumed any reflink style support wouldn't initially handle it, but it's good to see it mentioned here. In that case my issue can probably be closed.
To be clear, in my issue I'm assuming automatic "fast-copying" (cloning) rather than having to explicitly call cp --reflink or similar, because a lot of file copying/moving is done by programs that won't know to do this, so it's better for it to be automatic. For move/copy between datasets this will obviously depend upon ZFS knowing when that's the case (I'm not super familiar with the system calls themselves, so I don't know how easy that would be to detect?), but clearly it would be better if ZFS recognises as many cases where cloning is beneficial as possible, without having to be told.
One thing I'd like to discuss from my issue is copying between datasets where certain settings differ, with the main ones being copies, compression and recordsize.
If a target dataset has a copies setting that differs from the source then ideally copies should be created or ignored to match. So if the source has copies=2 and the target is copies=1 then the second copy won't be cloned, while if we flip that around (copies=1 -> copies=2) then we still need to create a new copy in addition to cloning the first, so that we don't end up with "new" data that is less redundant than an ordinary copy would have been. I didn't see any mention of copies being discussed, and can't tell if this is already implemented or not.
Meanwhile compression and recordsize represent settings that will mean that a cloned file will not be an exact match for the file as it would have been had it been copied normally, as the source dataset may not have compression but the target does, target may use a larger recordsize and so-on.
To cover these I propose a new setting, probably better named `blockcloneto` for this feature, which will control cloning between datasets. I propose three basic settings:
- `on`: block-cloning is always used where possible when this dataset is the target.
- `off`: block-cloning is never used (files are copied normally in all cases).
- `exact`: block-cloning is used when `compression` and `recordsize` for the source dataset match the target.
There may be other settings that need to be considered, but these are the ones I use that seem applicable to cloning.
The aim is to allow users to ensure that cloning is not used to produce files in a dataset that are less redundant, less compressed etc. than files that are copied normally.
So, I tried a first pass at wiring up the Linux copy/clone range calls, and found a problem.
You see, Linux has a hardcoded check in the VFS layer for each of these calls that:
```c
if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb)
	return -EXDEV;
```
That is, it attempts to explicitly forbid cross-filesystem reflinks in a VFS check before it ever gets to our code.
The options I see for dealing with this are all pretty gross.
- Lie about having the same `i_sb` for all datasets on a pool, and work around doing this anywhere we actually consume this value (eww, and seems like it could have weird side effects)
- Write our own ioctl reimplementing the equivalent functionality of `copy_file_range`, without this restriction (ewwww, cp et al wouldn't use it so we'd need `zfs_cp` or similar, though we might be able to eventually convince coreutils to support it directly...maybe...)
- I don't really have any other suggestions.
Thoughts?
> The options I see for dealing with this are all pretty gross.
>
> - Lie about having the same `i_sb` for all datasets on a pool, and work around doing this anywhere we actually consume this value (eww, and seems like it could have weird side effects)
> - Write our own ioctl reimplementing the equivalent functionality of `copy_file_range`, without this restriction (ewwww, cp et al wouldn't use it so we'd need `zfs_cp` or similar, though we might be able to eventually convince coreutils to support it directly...maybe...)
> - I don't really have any other suggestions.
If the aim is to clone as many copy/move operations as possible then should we be worrying about explicit cloning commands in the first place?
I think the ideal scenario is to avoid the need for those calls entirely, by detecting when a file is being copied between datasets in the same pool (including the same dataset) so we can use cloning instead, this way programs don't need to make any explicit cloning calls, since many don't/won't, so picking this up automatically on the ZFS side would be best, as it would mean more files "deduplicated" with no extra work by users or programs.
> > The options I see for dealing with this are all pretty gross.
> >
> > - Lie about having the same `i_sb` for all datasets on a pool, and work around doing this anywhere we actually consume this value (eww, and seems like it could have weird side effects)
> > - Write our own ioctl reimplementing the equivalent functionality of `copy_file_range`, without this restriction (ewwww, cp et al wouldn't use it so we'd need `zfs_cp` or similar, though we might be able to eventually convince coreutils to support it directly...maybe...)
> > - I don't really have any other suggestions.
>
> If the aim is to clone as many copy/move operations as possible then should we be worrying about explicit cloning commands in the first place?
>
> I think the ideal scenario is to avoid the need for those calls entirely, by detecting when a file is being copied between datasets in the same pool (including the same dataset) so we can use cloning instead, this way programs don't need to make any explicit cloning calls, since many don't/won't, so picking this up automatically on the ZFS side would be best, as it would mean more files "deduplicated" with no extra work by users or programs.
Changing the scope of this from "userland generated reflinking of ranges" to "reimplementing dedup inline" would significantly impact how useful it is, IMO. In particular, it would go from "I'd use it" to "I'd force it off".
The goal here is not (I believe) "maximize dedup on everything", the goal here is "allow userland to specify when something is wanted to be a CoW copy of something else, because that's where most of the benefit is compared to trying to dedup everything".
> Changing the scope of this from "userland generated reflinking of ranges" to "reimplementing dedup inline" would significantly impact how useful it is, IMO. In particular, it would go from "I'd use it" to "I'd force it off".
This seems like a huge overreaction; ZFS provides more than enough tools to control redundancy already, so there's simply no advantage to maintaining multiple copies of the same file beyond that (so long as the target dataset's settings are respected if necessary, see above).
I'm very much of the opposite position; if this feature is manual only then I guarantee you I will almost never use it, because so many tools simply do not support cloning even on filesystems that support it. If I have to jump to the command line to do it manually outside of a tool, or discard and replace copies after the fact, then that's an excellent way to guarantee that I'll almost never do it.
While there's certainly potential for offline deduplication scripts to make this a bit easier, e.g. scan one or more datasets for duplicates and replace them with clones of one of them, why force it to be done retroactively? Otherwise it's limited to just those with more specific needs, like cloning VMs or similar (where cloning datasets isn't suitable).
And if the cloning system calls aren't going to work for cross-dataset cases anyway, then automatic may be the only way to go regardless which is why I mentioned it; at the very least you could still turn it off by default, then turn it on temporarily when you want to clone to another dataset.
I don't know if I'd say they're not going to work...
```
$ ls -ali /turbopool/whatnow/bigfile2 /turbopool/whatnow2/bigfile3
ls: cannot access '/turbopool/whatnow2/bigfile3': No such file or directory
3 -rw-r--r-- 1 root root 10737418240 May 28 12:20 /turbopool/whatnow/bigfile2
$ sudo cmd/clonefile/clonefile /turbopool/whatnow/bigfile2 /turbopool/whatnow2/bigfile3
$ sudo cmd/zdb/zdb -dbdbdbdbdbdb turbopool/whatnow 3 > /tmp/file1
$ ls -ali /turbopool/whatnow/bigfile2 /turbopool/whatnow2/bigfile3
128 -rw-r--r-- 1 root root 10737418240 May 29 17:02 /turbopool/whatnow2/bigfile3
3 -rw-r--r-- 1 root root 10737418240 May 28 12:20 /turbopool/whatnow/bigfile2
$ sudo cmd/zdb/zdb -dbdbdbdbdbdb turbopool/whatnow2 128 > /tmp/file2
$ grep ' 200000 L0' /tmp/file{1,2}
/tmp/file1: 200000 L0 DVA[0]=<0:15800223c00:20000> [L0 ZFS plain file] edonr uncompressed unencrypted LE contiguous unique single size=20000L/20000P birth=377L/377P fill=1 cksum=1746b656272237d9:cd37f4f0b1f655f5:699bc3e57a9d0e06:72ebf1ea28603be2
/tmp/file2: 200000 L0 DVA[0]=<0:15800223c00:20000> [L0 ZFS plain file] edonr uncompressed unencrypted LE contiguous unique single size=20000L/20000P birth=4158L/377P fill=1 cksum=1746b656272237d9:cd37f4f0b1f655f5:699bc3e57a9d0e06:72ebf1ea28603be2
$ df -h /turbopool/whatnow2/bigfile3 /turbopool/whatnow/bigfile2
Filesystem Size Used Avail Use% Mounted on
turbopool/whatnow2 1.8T 21G 1.8T 2% /turbopool/whatnow2
turbopool/whatnow 1.8T 41G 1.8T 3% /turbopool/whatnow
```
It's more inconvenient, but it's certainly not unworkable.
I'm not saying there's no use for anyone if it's used to replace the existing inline dedup implementation - further up the thread, I advocate for doing just that, and implementing similar functionality to allow you to do inline dedup on send/recv with this.
But personally, the performance tradeoffs of inline dedup aren't worth it to me, especially compared to doing it as postprocessing later as needed, or with explicit notification, so if that was the only functionality, I would not be making use of it.
> So, I tried a first pass at wiring up the Linux copy/clone range calls, and found a problem.
>
> You see, Linux has a hardcoded check in the VFS layer for each of these calls that:
My understanding from reading the man page and kernel source code is that this is indeed a problem with all ioctl-based file clone/dedup calls but not with the newer more generic copy_file_range syscall.
copy_file_range has explicitly supported cross-filesystem copies since 5.3.
If the src and dst files are from ZFS but not on the same sb, with copy_file_range ZFS still has the chance to implement "copy acceleration" techniques.
https://github.com/torvalds/linux/commit/5dae222a5ff0c269730393018a5539cc970a4726
Still not optimal, since most tools that claim to be reflink compatible will first use the ioctl and only might fall back to copy_file_range. E.g. GNU coreutils cp and syncthing would still try copy_file_range, but other tools might not.
https://dev.to/albertzeyer/difference-of-ficlone-vs-ficlonerange-vs-copyfilerange-for-copy-on-write-support-41lm
coreutils (8.30) on 5.4 was where I saw EXDEV without ever executing our copy_file_range or remap_file_range, and looking in the source, 8.30 doesn't know about copy_file_range at all.
So even if the kernel provides copy_file_range, plenty of older platforms aren't going to play, unfortunately.
(Not that I'm not ecstatic to be wrong and that we can have our cake and eat it too, and also amused that Oracle originated the patch, but I think we still get to have a fallback, albeit a much simpler one than what I implemented, for those cases.)
e: anyway, i'll polish up the thing I have for making this play nice under Linux with cp --reflink and extending clonefile to use copy_file_range, and then I'll post it probably tomorrow for people to use or not as they like.
e: I actually just tried coreutils git and it also refused cp --reflink with EXDEV, so I'm guessing their autodetection handling is...incorrect, because actually calling copy_file_range works great.
Huh, did I really never post anything more in here? Rude of me.
It seems the consensus is Linux should change here, and that coreutils can't do anything to make the behavior I write about there better without violating correctness, but I put the odds of that as infinitely approaching 0 without quite getting there, so I suppose we'll be stuck with clonefile and/or documenting cp --reflink=always not behaving as expected, unless there's some detail I'm overlooking.
I've been busy and haven't done what I wanted to, which was extend the clonefile command to do smaller than full range copies and write a bunch of tests to exercise that and report back. Maybe I'll get that done in the next few days, we'll see.
> - Lie about having the same `i_sb` for all datasets on a pool, and work around doing this anywhere we actually consume this value (eww, and seems like it could have weird side effects)
i_sb is the superblock of a filesystem. Do you think it is actually lying, if the data can be linked freely across the pool? I wonder if it would be interesting to find the btrfs discussion on the topic - why they implemented it that way.
It's probably worth reading about some side effects of how btrfs handles subvolumes before thinking that might be a good route.
So if leveraging the system clone call isn't an option, what information do we have to work with?
My thinking was that if we can know the source dataset, target dataset, and the file/inode being copied or moved, then this should still be enough for ZFS to decide whether to clone or not, without having to be explicitly told?
I don't know enough about the file system calls to know how easy it is to get that information though; I guess I was just thinking that if we know a file is being copied within the same dataset, or copied/moved between two datasets under the same (or no) encryption root, then ZFS can simply decide to clone automatically based on whatever rules are set on the target (e.g. always clone where possible, only clone if compression/recordsize etc. are the same, or never clone).
It's never really made sense to me why an explicit call should be required to clone in the first place, because if you trust a filesystem enough to store your data then you shouldn't need to create extra redundancy by default, especially on ZFS with mirror or raidz vdevs, copies=2 etc. If those aren't enough redundancy for you, then you could still disable cloning (either permanently or temporarily).
It should still probably be an opt-in feature, as administrators know their pools best, but the three main options I outlined should be sufficient? Then if the clone system calls are improved in future, we could add a fourth setting "on" meaning cloning is permitted, but only when the appropriate system call is used? Actually, thinking about it, this option could also be available immediately, as even if the system calls won't work for now, we could still presumably have a ZFS-specific command to begin with (e.g. something under zdb) to handle the cloning entirely via ZFS? Not ideal, but for those who want manual cloning only, it would still make it possible in the interim.
> My thinking was that if we can know the source dataset, target dataset, and the file/inode being copied or moved, then this should still be enough for ZFS to decide whether to clone or not, without having to be explicitly told?
>
> I don't know enough about the file system calls to know how easy it is to get that information though;
The whole reason why reflink exists, and why it is an explicit operation, is because:

> To the filesystem, a cp isn't a copy -- it's one process reading from one file and writing to another. Figuring out that that is supposed to be a copy is very non-trivial and expensive, especially when taking into account metadata operations which aren't part of the regular file stream.

https://lwn.net/Articles/332076/
And IMHO if you want to do things implicitly ZFS already has dedupe.
Block Cloning / reflink is purposely built around being an explicit operation, so that the kernel knows the intention of userspace. There is no guessing game about whether some open, read and write syscalls all correlate together.
Kernel 5.19 has changed the behavior of copy_file_range again and mostly restored the old 5.3 behavior. This change is being (or will be) backported to older stable kernels.
- https://github.com/torvalds/linux/commit/868f9f2f8e004bfe0d3935b1976f625b2924893b
- https://lore.kernel.org/regressions/CAOQ4uxgya2-H9=qNZkRBO1exr=GRqyn=PFfGgAf0Px0VkH4bjQ@mail.gmail.com/
- https://github.com/qbittorrent/qBittorrent/issues/17352
- https://lkml.org/lkml/2022/7/24/296
But I do not think this changes anything already discussed:
This still applies:
> If the src and dst files are from ZFS but not on the same sb, with copy_file_range ZFS still has the chance to implement "copy acceleration" techniques.
If copy_file_range is not implemented by the filesystem (as is the case today with ZFS), then kernels [5.4;5.18] would previously fall back to generic_copy_file_range, but now this fallback will only happen if the src and dst are on the same sb.
No, it rather does.
If Linux removed the functionality from the only call that told ZFS it could make this choice, then we're left with a custom ioctl for ZFS if we want this.
The behavior now (5.19 and backports) is still (importantly) a bit different from pre 5.3.
- Pre 5.3 always did: `inode_in->i_sb != inode_out->i_sb` -> `-EXDEV`
- [5.4;5.18]: even with `(file_in->f_op->copy_file_range != file_out->f_op->copy_file_range) && (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb)` -> `generic_copy_file_range()`
- 5.19 does:

```c
if (file_out->f_op->copy_file_range) {
	if (file_in->f_op->copy_file_range !=
	    file_out->f_op->copy_file_range)
		return -EXDEV;
} else if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb) {
	return -EXDEV;
}
```
With 5.19 and [5.4;5.18], unlike 5.2, the copy acceleration path can be taken even if the sb is different. The kernel change in 5.19 only affects the cases where generic_copy_file_range() was previously taken as a fallback.
generic_copy_file_range() is now only taken if copy_file_range is not implemented by the fs but the inodes are from the same sb. (https://github.com/torvalds/linux/blob/master/fs/read_write.c#L1516-L1527)
Interesting, that is still good, thank you - I had seen an earlier revision of this which just reverted the behavior entirely, and not seen that it had changed further since.
How does this interact with send/receive to older systems that do not support this?
(Unless this drastically changed since I looked) Since the data is so pool-specific, it doesn't change what send/recv sees, which also means you get reduplicated data on the other side. As I remarked upthread, a neat feature would be building a DDT-alike on recv solely for deduplicating with BRT then, so people who really want to keep their space savings, can.
@pjd Hi, is there still any activity on this, as this feature would be a great enhancement to openzfs ?
Many thanks, Eric
> @pjd Hi, is there still any activity on this, as this feature would be a great enhancement to openzfs ?
>
> Many thanks, Eric
I have just pushed the last set of changes (I hope) that allow for safe cross-dataset cloning.
Apologies since I lost track; did this land in a state compatible with existing Linux copy_file_range/reflink support (i.e. cp --reflink) or is a separate flag/syscall/api needed to utilize this? Thanks!
> did this land in a state compatible with existing Linux reflink support (i.e. cp --reflink) or is a separate flag/syscall/api needed to utilize this? Thanks!
@adamdmoss It landed without Linux support yet. Only FreeBSD for now, but hopefully not for long.
> > did this land in a state compatible with existing Linux reflink support (i.e. cp --reflink) or is a separate flag/syscall/api needed to utilize this? Thanks!
>
> @adamdmoss It landed without Linux support yet. Only FreeBSD for now, but hopefully not for long.
Cool - thanks for the clarification.