initramfs-scripts: more elegant `canmount` property handling?
A Debian user reported a bug: datasets with the property canmount=noauto set would also be mounted (or at least an attempt would be made) during boot by the initramfs scripts.
This behavior is described in:
https://github.com/openzfs/zfs/blob/1f3444f2bba42817520a8097a29d24a3b4115927/contrib/initramfs/scripts/zfs#L337-L343
I understand this logic is mainly there for mounting the rootfs, which has noauto set. Maybe we could avoid mounting other, similar datasets, e.g. by comparing the dataset name with the actual ROOT?
# Skip filesystems with canmount=off. The root fs should not have canmount=off, but ignore it for backwards compatibility just in case.
That comment seems confusing. It rather seems boot environments all get canmount=~off~noauto set by default (EDIT: off is for placeholder/inheritance filesystems), and the bootfs exception made in the code is important to allow mounting it anyway if set as ${ZFS_BOOTFS}.
And for the problem at hand, it might just be that the [ "$canmount" = "off" ] test should rather check whether the value is "off" or "noauto", to also skip non-root, noauto filesystems.
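For illustration, a minimal sketch of what that widened check could look like, following the shape of the existing mount_fs() test (not a tested patch):

if [ "$fs" != "${ZFS_BOOTFS}" ]
then
	canmount=$(get_fs_value "$fs" canmount)
	# Skip both values for non-root filesystems, not only "off":
	case "$canmount" in
		off|noauto) return 0 ;;
	esac
fi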
Could there be a more elegant canmount handling?
~1. can canmount=yes be used to mark the active boot environment? but there must only ever be at most one enabled, to allow selectively mounting only one boot environment~
~1. could zpool import refuse to mount any / mount if it's already mounted (user forgot the -R option to set an altroot)? i.e. to prevent mounting a foreign rootfs / over the current rootfs / when (force) importing?~
(EDIT: see below)
CC: original authors @rlaager @FransUrbo, and maintainers @behlendorf @tonyhutter. What's your opinion on this?
It rather seems boot environments all get canmount=off set by default
On my Ubuntu daily driver, which I believe was installed using the installer's ZFS support, I have canmount=on. Looking at the HOWTOs I maintain (yes, I know I'm behind right now), the Ubuntu 22.04 instructions seem to result in the default of canmount=on. On Debian, which is not using zsys, the HOWTO explicitly uses canmount=noauto.
the exception made in the code is important to allow mounting it anyway if set as
${ZFS_BOOTFS}.
I think I would expect canmount=noauto, so the exception is important if this behavior is changed as discussed below.
And for the problem at hand, it might just be that the [ "$canmount" = "off" ] test should rather check whether the value is "off" or "noauto", to also skip non-root, noauto filesystems.
I think that is what is being requested in the Debian bug.
That seems reasonable to me. Fundamentally, if the filesystem is noauto, that should mean the filesystem is no(t) auto(matically) mounted.
Of course, it's tricky messing with default behavior like this because you can break people's existing systems.
I recently wrote a replacement set of (systemd-based) zfs initcpio scripts for Arch, so I gave this problem some thought.
As I understand it, canmount=noauto is used mostly as a hack to allow having multiple OS installations to co-exist on the same pool in different datasets (which is a pretty cool ability of dataset-oriented filesystems; I've been using it on BTRFS previously and I'm using it on ZFS now).
The idea is that canmount=noauto is "like an off, but not off". You apply canmount=noauto to all filesystems in the alternate OS subtrees — the ones that you want to be treated as canmount=off (to prevent userspace from automounting them, because they will overmount the OS you wanted to boot into), but at the same time distinguished from true canmount=off datasets, which exist purely for organizational reasons and are never meant to be mounted at all.
With this strategy, the boot scripts must ignore canmount=noauto (treat canmount=noauto for the subtree you want to boot as canmount=on), but the rest of userspace must NOT ignore canmount=noauto (for all other subtrees).
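In command terms (dataset names are only illustrative), the distinction is roughly:

# zfs mount -a                    # skips datasets with canmount=noauto (and off)
# zfs mount pool/ROOT/alt-os/usr  # an explicit mount still works despite noauto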
Does this make sense?
Seems to make sense, from trying to interpret things, not knowing original intentions.
It's the problem of managing different boot environments. I made a GhostBSD test-install VM, and it looks like this:
# zfs get canmount
NAME                                      PROPERTY  VALUE   SOURCE
zroot                                     canmount  on      default
zroot/ROOT                                canmount  on      default
zroot/ROOT/25.02-backup-2025-11-28-11-05  canmount  noauto  local
zroot/ROOT/default                        canmount  noauto  local
zroot/ROOT/default@2025-11-28-11:05:56-0  canmount  -       -
zroot/home                                canmount  on      default
zroot/tmp                                 canmount  on      default
zroot/usr                                 canmount  off     local
zroot/usr/ports                           canmount  on      default
zroot/var                                 canmount  off     local
zroot/var/audit                           canmount  on      default
zroot/var/crash                           canmount  on      default
zroot/var/log                             canmount  on      default
zroot/var/mail                            canmount  on      default
zroot/var/tmp                             canmount  on      default
Not sure why /usr and /var are 'off'; maybe because they are just too dangerous to get mounted accidentally. But the whole thing seems flawed, with zpool import over-mounting the running system by default instead of shielding it with a default -R altroot.
It's way too difficult and error prone to find a safe way to properly mount an arbitrary zpool. Maybe?:
# zpool import -N -R /inspect -f zroot
# zfs mount zroot/ROOT/default
# zfs umount -a
Another thing that seems wrong could be that, instead of only configuring one currently active root or boot environment, ZFS requires shielding off all of them and then selecting one dynamically for the boot.
Could maybe sub-mounts be defined safely by relative mounting paths, instead?
zroot/home mountpoint ../ROOT/default/home # with 'default' rather called 'active'?
I vaguely remember that when I wrote this almost a hundred years ago :), there were issues booting from an FS that had canmount=off, which is what the canmount=noauto is for - it allows root FS' to boot if you so specify. As in, booting from a snapshot..
But don't quote me on this, it was a VEEEERYYYYY long time ago :D :D. And there's nothing saying that this is still the case (as in, booting from canmount=off), but.. ??
/usr and /var would just be container datasets
[EDIT: Configuring classic fstab mounting mused about here would work, yet see proposed solution for zfs auto-mounting, below.]
I'm getting the impression that no single zfs-options based mount config can ever work well for multiple boot environments (i.e. root FS systems) to coexist without conflicts, because each rootfs would really need to be able to have its own config for all its "coupled system state": https://docs.zfsbootmenu.org/en/v3.0.x/general/bootenvs-and-you.html
So, basically, independent boot environments should probably just use standard /etc/fstab configurations, i.e. mountpoint=legacy (https://superuser.com/a/1557822).
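A minimal sketch of that (the dataset name and mount point are illustrative):

# zfs set mountpoint=legacy pool/ROOT/debian/var
Then, in that boot environment's /etc/fstab:
pool/ROOT/debian/var  /var  zfs  defaults  0  0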
Mounting the (above) separated dataset with /var (kept unchanged across boot environments, not rolled back with them), for example, may work for BSD, because there /var does not contain any system-level "/var/lib" stuff as in Linux distros. Similarly, the separate /usr dataset with installed user software may work well for BSDs (within the same release), because there is no such thing as a "usrmerge" to displace system-level software into /usr.
However, when separate datasets are created for /var and /usr in, say, Debian (https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Trixie%20Root%20on%20ZFS.html), this seems to go against keeping the "coupled system state" nicely together for boot alternatives and roll-backs. (At the very least, it seems to require a lot of cat-and-mouse hunting for subdirs to separate into proper, boot-environment-independent datasets, if it's at all possible to do cleanly.)
We need to keep in mind WHY we have separate file systems for the operating system in the first place.
"In the old days", hard drives where extremely (!!) expensive and small. They were also prone to failure. This was LOOONG before RAID!! And even longer before Linux :). MANY decades before!! :)
So to maximise the chance of getting the system back up, "the bare minimum" system - what we call the "root file system" (/etc, which contained some system binaries, and /bin, where most of the "boot binaries" were; later /sbin came in) - was on one filesystem (also keep in mind that the root user's home directory was there as well, all mixed in with everything else! :), and the rest was split up onto other drives.
So if /usr (which originally hosted the users - usr=user) - or any other FS except the root FS, which was kept as small and unchanging as possible (changes, i.e. writes, could create inconsistencies if the machine crashed while something was being written to the disk/file system) - crashed, you could boot into single user, do the repairs, reboot, and all good..
That was the idea, and it had a lot more success than you'd think :). I've saved countless systems that way, before I could afford my first RAID card - the very earliest versions of Linux RAID (md) weren't very good or stable :D :D. Although, in the early days, I only had one or two drives, so I had to do partitioning - a file system was more likely to crash than the disk itself.
But with RAID, and especially something as .. "error correcting" and "fault tolerant" as ZFS, there simply is no need for separate file systems anymore! This is just a .. "leftover" from those dark and rainy days.. Where it actually hailed horizontally more than it actually rained :D :D. I'm actually still amazed that that's the default option on most Linux install systems. It shouldn't be imo.
But your point of multi-boot is correct. I do remember making a design decision about that, because I couldn't figure out a way to solve that in a good way. BUT, that canmount=noauto does help in that, as long as you set root=... correctly, AND (!) you have one OS per FS (as in, not split up "like we use(d) to"). That even lets you boot from a snapshot, because the whole OS (including /var, /usr etc etc) is on there/that.
But it still doesn't quite work .. perfectly if you're booting several different operating systems (I tried a few versions of *BSDs and Linuxes); it really didn't work very well. Back then, I should say - it might (?) work much better now.
This was my design decision, and I still think that was the right decision. If someone has something better or smarter, that won't break this, then by all means, PR, and I'm sure someone will take a look in the next few years :).
We need to keep in mind WHY we have separate file systems for the operating system in the first place.
The still-possible non-RAID hardware (failures) seem to be only one aspect, but just as you say, the OS separation is still very much relevant for other reasons like multi-boot, configuration errors, startup errors, image updates, roll-backs, ...
After thinking a bit more about canmount=noauto it seems fine to me, given the ZFS auto-mount behaviour.
The proposal in the orig. report above, comparing the dataset name with the configured bootfs name, would not seem to help for non-bootfs datasets.
What seems to be missing is a way to associate non-bootfs filesystems with just specific boot environments.
Maybe set canmount=zpool/ROOT/DEBIAN? To restrict auto-mounting to a bootfs below that path, e.g. zpool/ROOT/DEBIAN/default.
Proposed solutions:
- https://github.com/openzfs/zfs/pull/17995
- https://github.com/openzfs/zfs/issues/17996
The second patch is never going to happen, sorry..
It would break existing systems and cause inconsistencies between installations, and between versions.. That's not where that value should be.. You put that in your boot loader's root=... command line.
The second one - that is there to automount (even though it's set to noauto!) any and all filesystems below the root FS. This is necessary (required) for multi-boot.
In a previous comment, you showed the operating system filesystems on the same level (?) as the root fs:
pool/root
pool/var
pool/usr
However, that check is there to make sure that this works (which, IF (!) you need multiple FS' for this - which I've declared you don't - I think that is a/the bug!) - see line 335:
pool/root
pool/root/var
pool/root/usr
Now, it is not completely clear to me what the Debian GNU/Linux bug report actually means. Do they mean that the first path example here, or the second, is what they have!?
If the former, then there really IS a bug there somewhere, but then either 1) the FS option is set wrongly (for some reason - a bug in the installer, or user/admin error!?), or 2) the installer created the FS' wrongly (it should be the second example - but it's better not to have multiple filesystems in the first place).
But either way, your first patch would stop the second way of doing it working. I would, if I had any voice in this (which I don't), vote no for both patches.
No matter how I think about this, this isn't a problem/bug in the initrd script. It's a feature! One that's there for a reason, made intentionally.
The bug is in the installer (or user/admin error). Imo.
Oh, and the noauto is there to stop the system mounting it/them automatically when not booting from that os/root.
There are only three values allowed: on, off and noauto, so there isn't much to play with here. Now, if you believe strongly enough about adding a fourth value to that option, then .. I have no idea how to do that, or what people you need to convince, but you probably have to have EXTREMELY (!!) strong reasons :).
Oh, and the noauto is there to stop the system mounting it/them automatically when not booting from that os/root.
Exactly. AFAIU, the noauto is used to prevent automounting of all the boot environments ('/' rootfs, often below pool/ROOT), and the bootfs property selects the one to mount and boot nevertheless (https://wiki.freebsd.org/BootEnvironments#Setting_Boot_Dataset).
I don't understand the breakage you're trying to explain, yet. My guess is you contrast the bsd layout I posted above (with boot environments under ROOT/, some top-level filesystems, and some below /var and /usr placeholders) vs. nesting filesystems within boot environments (as in this question: https://unix.stackexchange.com/questions/343656/zfs-on-linux-snapshot-recursively-volume-and-subvolumes). Are you saying these nested filesystems must also be set to noauto, but then get auto-mounted nevertheless?
I think that would already make a strong case to clearly and explicitly allow the automounting of these sub-mounts only when booting their corresponding boot environment, i.e. by using the proposed [now improved] canmount=bootfs:mypool/ROOT/rootfs, instead [EDIT:] ~or~ of relying on the ineffectiveness (or exception/not-working/brokenness) of the noauto setting.
It's also not very clear what you're saying about a line 355, in?: https://github.com/openzfs/zfs/blob/5f5e0e589bde543a25c827736495dd6f99fdf4b2/contrib/initramfs/scripts/zfs#L355
So there it checks the org.zol:mountpoint property (different from mountpoint) - why, for what?
The originally proposed canmount=mypool/ROOT/rootfs would (only) enable automounting (to the actual mountpoint, which is configurable with the mountpoint property) if currently booting the boot environment mypool/ROOT/rootfs (or any descendant, if it's just a placeholder).
So, maybe more readable?: canmount=bootfs:/ROOT/rootfs
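To make the idea concrete, the initramfs check could hypothetically interpret such a value along these lines (the bootfs: value does not exist in ZFS today, and the want variable name is just mine; this is only a sketch of the proposal, combined with the widened noauto skip, not working code against current ZFS):

if [ "$fs" != "${ZFS_BOOTFS}" ]
then
	canmount=$(get_fs_value "$fs" canmount)
	case "$canmount" in
		off|noauto)
			return 0 ;;
		bootfs:*)
			# hypothetical: mount only when the booted BE is the named dataset or below it
			want="${canmount#bootfs:}"
			case "${ZFS_BOOTFS}" in
				"$want"|"$want"/*) ;;
				*) return 0 ;;
			esac ;;
	esac
fi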
Now, it is not completely clear to me what the Debian GNU/Linux bug report actually means. Do they mean that the first path example here, or the second, is what they have!?
@FransUrbo I think the original bug report stated it clearly: if there are datasets marked with canmount=noauto, it means the user does not want them to be automatically mounted on boot, for example an encrypted dataset. However, datasets for the system rootfs also use this property, so the initramfs scripts now ignore noauto entirely. We might want some change on that. #17995 looks reasonable to me, if it does not break something implicitly.
https://github.com/openzfs/zfs/pull/17995 looks reasonable to me, if it does not break something implicitly.
Maybe it's just that noauto is misdefined, misused, broken, or just not working for non-rootfs/bootfs filesystems, i.e. currently actually required to be ineffective or restricted, for opaque reasons.
It would then still seem better to make noauto effective (fix it), and provide a clean alternative at least for non-bootfs filesystems instead, to clearly enable limited auto-mounting, thus:
- https://github.com/openzfs/zfs/issues/17996
But I think the canmount=noauto could stay backwards compatible (continue to be overridable by bootfs= for just the rootfs/bootfs). Yet, boot environments could migrate to generally use something like canmount=bootfs:/ROOT/default instead of canmount=noauto (for rootfs and non-rootfs), so they'd have auto-mounting clearly enabled (and properly limited to their own boots).
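Hypothetically (again, this value is not accepted by zfs set today; dataset names purely illustrate the migration idea):

# zfs set canmount=bootfs:pool/ROOT/default pool/ROOT/default
# zfs set canmount=bootfs:pool/ROOT/default pool/ROOT/default/var/lib/docker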
Are you saying these nested filesystems must also be set to noauto, but then get auto-mounted nevertheless?
Yes, because they're part of the operating system. They can't be mounted automatically, because they would then try to mount over the existing file systems, which would fail because those are (probably) in use.
Let's say that you have
pool/root-linux
pool/root-linux/usr
pool/root-linux/var
pool/root-freebsd
pool/root-freebsd/usr
pool/root-freebsd/var
If all the pool/root-.../... do NOT have canmount=noauto, they would be mounted (as in, when zfs mount -a is called later in the boot process), and if/when you boot from the Linux root FS, you might end up with a FreeBSD /usr! Or vice versa, or any random (?) variant of that.
That is, if the FS isn't already in use - in which case the command/bootup will fail, because it can't mount an FS over an FS that is in use.
The only way to get multi-boot to work is to set them ALL to noauto and let that logic deal with mounting everything below the root FS.
Setting it to off would do exactly what it says in the doc - don't mount it at all - and on would give exactly the behaviour I just mentioned. The only option is to have it set to noauto.
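For example, a sketch of how such a nested layout would be created (using the dataset names above; canmount is not inherited, so it is set on each dataset):

# zfs create -o mountpoint=/ -o canmount=noauto pool/root-linux
# zfs create -o canmount=noauto pool/root-linux/usr
# zfs create -o canmount=noauto pool/root-linux/var
# zfs create -o mountpoint=/ -o canmount=noauto pool/root-freebsd
# zfs create -o canmount=noauto pool/root-freebsd/usr
# zfs create -o canmount=noauto pool/root-freebsd/var

The children inherit their mount points (/usr, /var) from the boot environment's mountpoint=/.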
On the other hand, if you have:
pool/root-linux
pool/root-freebsd
pool/usr-linux
pool/usr-freebsd
pool/var-linux
pool/var-freebsd
There's absolutely no (safe programmatic) way to determine which file systems to mount where. Same problem as above. And it doesn't have to be that obvious. You can have:
pool/root-linux-debian
pool/root-linux-ubuntu
pool/usr-linux-debian
[etc]
Or even weirder naming schemes! That is why it is imperative that you have a nested FS structure, and canmount=noauto.
canmount=bootfs:mypool/ROOT/rootfs
That would (likely?) break any other OS that uses ZFS, and it would cause a deviation from what/how ZFS works - it adds a value that won't exist anywhere else where ZFS runs. The canmount property can only have three values.
If you can get the whole ZFS community to agree to this value, then by all means. BUT it would still break older ZFS - mounting a pool on something older (say mounting a modern ZFS on Solaris or whatever, this must be possible!). So it is extremely unlikely that they would agree to that.
AND, I seriously doubt that the Linux ZFS community would agree to it, because then they would deviate from established ZFS behaviour.
instead [EDIT:] ~or~ of relying on the ineffectiveness (or exception/not-working/brokenness) of the noauto setting.
I don't agree that it's broken. >I< think the problem is a problematic/faulty/broken FS path layout..
It's also not very clear what you're saying about a line 355, in?:
Oh, sorry. I was looking at your patch..
https://github.com/openzfs/zfs/blob/5f5e0e589bde543a25c827736495dd6f99fdf4b2/contrib/initramfs/scripts/zfs#L335
The important part of that line is the "${fs}" part. It lists only file systems below (!) the one we want to mount. Not ALL file systems.
Now, if mount_fs() is called without parameters, then yes it will mount anything that isn't canmount=off. I need to go over that script in more detail and see if it was changed after I wrote it a hundred years ago and someone broke this behaviour..
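For reference, listing only a subtree (independently of the exact script code; names are illustrative) looks like:

# zfs list -H -o name -t filesystem -r pool/root
pool/root
pool/root/usr
pool/root/var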
#17995 looks reasonable to me, if it does not break something implicitly.
That's the problem, it will break the setup I mentioned in https://github.com/openzfs/zfs/issues/17963#issuecomment-3595977652
Looking at the original Debian GNU/Linux bug report again (third time now! :), I'd like to know more from the submitter:
- What does the file system layout look like? The test/duplication command included seems to indicate a flat structure.
- If it is indeed a flat structure, is it actually the initrd script that mounts it, or is it mounted later by the bootup process?
I've been looking over the initrd script, and I can't see anything wrong with it.
https://github.com/openzfs/zfs/blob/master/contrib/initramfs/scripts/zfs#L967-L978
At line 967, the pool (${ZFS_RPOOL}) is imported; it has found the boot FS (${ZFS_BOOTFS}), and it will there (967-977) mount the root FS and all file systems below it, and then any additional FS' specified (${ZFS_INITRD_ADDITIONAL_DATASETS}).
As far as I can see, it (the initrd script) works exactly as designed, so I'm unsure as to how (and where!) the original bug actually occurs. I do not believe it is as easy as https://github.com/openzfs/zfs/blob/master/contrib/initramfs/scripts/zfs#L342 seems to indicate. That should work as intended!
Ok, I think I found it..
The problem is this commit: https://github.com/openzfs/zfs/blob/e865e7809e/contrib/initramfs/scripts/zfs#L61-L62
It removed all local variables - https://github.com/openzfs/zfs/commit/e865e7809e3c920d1d37e52978ea1175957cc4a0#diff-02baf31fc74cea480e9c7b32573633478653e2379888f84142b8887848a35517L60 - which means that when get_fs_value() is called from within the mount_fs() function (which has also had its local removed!), it (they!) overwrites the ${fs} variable that mount_fs() was called with!
So when it goes through the loop in the main code (https://github.com/openzfs/zfs/blob/master/contrib/initramfs/scripts/zfs#L976-L978), the ${fs} variable is .. overwritten..
I think. It doesn't look correct anyway..
Actually, there's two loops where this is happening: https://github.com/openzfs/zfs/blob/master/contrib/initramfs/scripts/zfs#L969-L978
So the correct solution is to simply rename all the sub-functions' ${fs} variables to something unique - all variables are now global!
Oh, and that goes for all variables in the whole script! I noticed that ${pool} is also used, where it can be overwritten by a sub-function. I'm sure there's more, but that is the solution: Understand that all variables are global!
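A small, self-contained sh snippet (not the actual initramfs code) illustrating the hazard:

#!/bin/sh
# Without 'local', an assignment inside a helper clobbers the caller's
# variable of the same name, because every variable is global.
helper() {
	fs="${1}/child"    # same name as the caller's variable
}
for fs in pool/a pool/b; do
	helper "$fs"
	echo "after helper: fs=$fs"    # prints pool/a/child and pool/b/child
done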
To verify and make sure, set zfsdebug=yes on the bootloader command line and analyze the /var/log/boot.debug file, looking at the values of ${fs} each time.
The offending commit is five years old now, and I'm a bit amazed that it hasn't been triggered until now :).
Thank you for your explanation and review, I think I could follow you this time, and it's helping a lot in getting things sorted out.
- So, I understand other ZFS implementations may not have as much of a need for multi-OS boot as, e.g., Linux distributions. Just to scrutinize it a bit: since the diff for a canmount=bootfs automount allowance seems so small, I think it could support arbitrary layouts:
pool/root-linux-debian mountpoint=/ canmount=bootfs:pool/root-linux-debian (rootfs=pool/root-linux-debian mounted)
pool/root-linux-ubuntu mountpoint=/ canmount=bootfs:pool/root-linux-ubuntu (rootfs=pool/root-linux-ubuntu mounted)
pool/usr-linux-debian mountpoint=/usr canmount=bootfs:pool/root-linux-debian (rootfs=pool/root-linux-debian mounted)
And importantly, noauto could perfectly prevent auto-mounting of filesystems nested within a boot environment with this.
- So, the default BSD install output I posted above, we can say, is set up wrongly. The mounts belonging to the OS should be nested below the boot environment, and they are missing the canmount=noauto property. And generally, snapshotting the boot environment must not be done recursively. As a user, that was new to me (didn't see it), but somehow at least the BSD people must have missed that point as well. Does anybody have a pointer to where this "noauto relaxation for the bootfs hierarchy (all nested mounts)" is in the docs? So, I will have to move my default install around manually, to hunt for the intended behaviour.
- So, https://github.com/openzfs/zfs/pull/17995 is obsolete in its current form. Hopefully a fix for the following will help to restore better noauto effectiveness. But it may still not work for nested filesystems or single-top-level-only installs (until there is a proper, selectively applicable, path-limiting canmount= allowance alternative to noauto, e.g. https://github.com/openzfs/zfs/issues/17996).
- So, the mechanical "checkbashisms removal" of local... I'm having a hard time following that. Do you think you could list or PR the unique variable name changes you see missing?
Why are you trying to deliberately complicate something that's very simple!?
Until you have got approval to change the usage of canmount from the greater community, there is little point in insisting on this. I understand what you're trying to do, but it is extremely unlikely to be possible to do!
But even if it is, and you get approval for this, it's just several extra layers of complications. Just use a nested hierarchy as it is meant to be.
I will no longer entertain or discuss this part as even a remote possibility. IF (!!) you would get approval to do this from the greater community, then I will object to it as a valid solution, but I will not be able to stop you (or anyone else). It's a horrible solution, to a problem that does not exist..
It's fairly easy to move (zfs rename) existing, flat file systems to a nested layout, so there IS no problem imo.
As for the correct fix, this is the current code:
get_fs_value()
{
fs="$1" # `$fs` is a global variable!
value=$2
"${ZFS}" get -H -ovalue "$value" "$fs" 2> /dev/null
}
mount_fs()
{
fs="$1" # `$fs` is a global variable .. that is also used in `get_fs_value()` and `mountroot()`!
# Check that the filesystem exists
"${ZFS}" list -oname -tfilesystem -H "${fs}" > /dev/null 2>&1 || return 1
# Skip filesystems with canmount=off. The root fs should not have
# canmount=off, but ignore it for backwards compatibility just in case.
if [ "$fs" != "${ZFS_BOOTFS}" ]
then
canmount=$(get_fs_value "$fs" canmount)
[ "$canmount" = "off" ] && return 0
fi
[...]
}
mountroot()
{
[...]
# `$fs` is a global variable .. that is also used in `get_fs_value()` and `mount_fs()`!
for fs in $filesystems; do
IFS="$OLD_IFS" mount_fs "$fs"
done
IFS="$OLD_IFS"
for fs in $ZFS_INITRD_ADDITIONAL_DATASETS; do
mount_fs "$fs"
done
[...]
}
This means, that the main function (mountroot()) goes through the filesystems it wants to mount, as well as additional ones, using the ${fs} variable to keep track of them.
It then calls mount_fs() with this value (which also sets the ${fs} variable), and this function calls the get_fs_value() function (which also sets the ${fs} variable!!). Somewhere in there, ${fs} becomes undefined (unset?) because it's the SAME (!) variable. It's a global variable, NOT a local one (as it used to be before that commit I mentioned).
So (at least) THREE different functions use and modify this one variable..
Solution is to make sure they're unique - ALL (!!) variables are global!!
So, example:
get_fs_value()
{
get_fs="$1" # This is still a global variable, but the name is unique and ONLY used in this function!
value=$2
"${ZFS}" get -H -ovalue "$value" "${get_fs{" 2> /dev/null
}
mount_fs()
{
mount_fs="$1" # This is still a global variable, but the name is unique and ONLY used in this function!
# Check that the filesystem exists
"${ZFS}" list -oname -tfilesystem -H "${mount_fs}" > /dev/null 2>&1 || return 1
# Skip filesystems with canmount=off. The root fs should not have
# canmount=off, but ignore it for backwards compatibility just in case.
if [ "${mount_fs}" != "${ZFS_BOOTFS}" ]
then
canmount=$(get_fs_value "${mount_fs}" canmount)
[ "$canmount" = "off" ] && return 0
fi
[...]
}
mountroot()
{
[...]
# This is still a global variable, but the name is unique and ONLY used in this function!
# Make sure to find any other reference to `$fs` in the code, and change similar to above!
for fs in $filesystems; do
IFS="$OLD_IFS" mount_fs "$fs"
done
IFS="$OLD_IFS"
for fs in $ZFS_INITRD_ADDITIONAL_DATASETS; do
mount_fs "$fs"
done
[...]
}
This way, each function has a completely separate variable for its parameter and information about what it's doing. BUT, it doesn't stop with just that variable! There's more like it..
PS. I would recommend wrapping variables in {} everywhere, to make this consistent. As in, instead of writing $fs, it should be ${fs} ($OLD_IFS => ${OLD_IFS} etc), and the same goes for ALL variables.
Why are you trying to deliberately complicate something that's very simple!?
Hm, hm, simple. Ok, thanks for the heads up.
I suspect the issue is with creating filesystems under the boot environment, within a filesystem that is not meant to be shared with other OSes, like the docker filesystems in ~the debian bug report~ (no, that must have been somewhere else).
So the noauto property is not going to work, and the installer, user, or program needs further permissions and local layout knowledge to move or create filesystems elsewhere, instead of just in the default place and adding a property.
Honestly, I didn't quite expect stumbling over https://github.com/openzfs/zfs/issues/17703 either, so I became wary of migrating. I've seen degraded errors happening with btrfs, but btrfs-restore exists and could read most of it, so I didn't even fuss with btrfs-rescue questions.