Cannot mount user's 'HomeDirectory' on ZFS dataset at boot time: Insufficient privileges
Hi all!
Since Security update 2020-02 (?) on my macOS 10.14.6 Mojave, all my macOS user accounts with a HomeDirectory defined on a ZFS filesystem can NOT be mounted at boot time.
Short story: Having defined
- an OS X compatible ZFS filesystem mounted on /Volumes/zDisk/Home/bob
- HomeDirectory=/Volumes/zDisk/Home/bob for user bob, either through System Prefs -> Users & Groups or dscl (pick one),
after reboot, ZFS can NOT mount this ZFS filesystem 👎
Note: All other ZFS mounts are OK.
Clue in file /var/log/org.openzfsonosx.zpool-import-all.err
cannot mount 'zDisk/Home/bob': Insufficient privileges
sudo zfs mount zDisk/Home/bob afterwards is OK!! with correct ACLs (owner bob).
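For reference, the setup looks roughly like this (a sketch; pool, dataset, and owner names are from my example above):
# create the dataset that will hold bob's home
sudo zfs create -o mountpoint=/Volumes/zDisk/Home/bob zDisk/Home/bob
# give it to the user
sudo chown -R bob:staff /Volumes/zDisk/Home/bob
# verify it mounts by hand
sudo zfs mount zDisk/Home/bob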
Long Story
MacPro 5,1 (2010) with SSD (APFS - macOS 10.14.6 (18G4032)/ Darwin 18.7.0) and 3 internal SATA drives.
sysctl {spl,zfs}.kext_version
spl.kext_version: 1.9.4-0
zfs.kext_version: 1.9.4-0
Most user accounts were stored on the internal 3x SATA pool, each on its own ZFS filesystem.
Since macOS 10.14:
- A softlink can't be used anymore; the typical
ln -s /Volumes/zDisk/Home/myuser /Users/myuser
no longer works.
- The HomeDirectory param has to be declared instead, one of:
  - through System Prefs -> Users & Groups
  - through dscl localhost -read /Local/Default/Users/myuser NFSHomeDirectory (then use -change); see the sketch below.
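A hedged example of reading and changing the value with dscl (the -change verb wants the old value before the new one; paths are from my setup):
# read the current home directory of the account
dscl localhost -read /Local/Default/Users/myuser NFSHomeDirectory
# change it: old value first, then the new ZFS-backed path
sudo dscl localhost -change /Local/Default/Users/myuser NFSHomeDirectory /Users/myuser /Volumes/zDisk/Home/myuser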
Tests
Fiddling with some of the ZFS parameters didn't solve the problem:
- com.apple.devdisk: on or poolonly
- xattr=on (in the first place), then =sa (no better)
- Giving Full Disk Access in the Gatekeeper "Security & Privacy" prefs to:
  - /usr/local/bin/zfs, as suggested by JMoVS in issue #752
  - /usr/local/bin/zpool, as it is called at boot time in /Library/LaunchDaemons/org.openzfsonosx.zpool-import-all.plist
  - /bin/bash, as suggested in the OpenZFS-on-OSX FAQ
- com.apple.browse: on or off
- rebuilding the kext cache
- putting the ZFS mountpoint behind a regular folder on the parent ZFS dataset, with ACLs/owner set to the user
- chmod o+wx on the parent ZFS dataset zDisk/Home
Common ZFS params:
com.apple.ignoreowner=off
com.apple.mimic_hfs=on
aclmode=passthrough
aclinherit=restricted
utf8only=on
normalization=formD
casesensitivity=insensitive
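For completeness, a sketch of creating a home dataset with those params in one go (dataset name is my example; note that utf8only, normalization, and casesensitivity can only be set at creation time):
sudo zfs create \
    -o com.apple.ignoreowner=off \
    -o com.apple.mimic_hfs=on \
    -o aclmode=passthrough \
    -o aclinherit=restricted \
    -o utf8only=on \
    -o normalization=formD \
    -o casesensitivity=insensitive \
    zDisk/Home/bob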
Some readings on similar cases
Big thanks for the follow-up :-)
William
Note:
zpool import ... is started by a LaunchDaemon, /Library/LaunchDaemons/org.openzfsonosx.zpool-import-all.plist
LOG file /var/log/org.openzfsonosx.zpool-import-all.err
(...) cannot mount 'zDisk/Home/bob': Insufficient privileges (...)
LOG file /var/log/org.openzfsonosx.zpool-import-all.log
+zpool-import-all.sh
Tue Apr 7 08:29:47 CEST 2020
2020-04-07 08:29:59.489 system_profiler[390:2544] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
11.82 real 0.70 user 0.97 sys
Waiting up to 60 seconds for the InvariantDisks idle file /var/run/disk/invariant.idle to exist
Found /var/run/disk/invariant.idle after 0 iterations of sleeping 0.1 seconds
Tue Apr 7 08:29:59 CEST 2020
Running zpool import -a
Tue Apr 7 08:30:09 CEST 2020
Destroy and re-create the pool from a backup source.
Tue Apr 7 08:30:33 CEST 2020
Finished running zpool import -a : 1
Touching the file /var/run/org.openzfsonosx.zpool-import-all.didRun
Tue Apr 7 08:30:33 CEST 2020
-zpool-import-all.sh
Hmm could we change the import script to run
dtruss zpool import .... and see which actual syscall returns the error...
Hi Jorgen,
- Changed the command to dtruss "${ZPOOL}" import -a -d /var/run/disk/by-id in /usr/local/libexec/zfs/launchd.d/zpool-import-all.sh.
- Reboot.
- LOG file /var/log/org.openzfsonosx.zpool-import-all.err:
dtrace: failed to execute /usr/local/bin/zpool: (os/kern) failure
Seems zpool import did not do the job:
$ zpool list
no pools available
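So the pool had to be imported by hand afterwards, presumably with the same command the script runs:
sudo zpool import -a -d /var/run/disk/by-id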
You might need to disable SIP for dtrace, for it to be able to run.
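Checking and relaxing SIP goes roughly like this (the enable/disable part only works from the Recovery Terminal; the --without variant is unsupported by Apple, so treat this as a sketch):
# from a normal shell: show the current SIP state
csrutil status
# from the Recovery Terminal: disable SIP entirely
csrutil disable
# or, reportedly, keep SIP but lift only the dtrace restriction (unsupported)
csrutil enable --without dtrace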
Ach!!! For now I can't get into Recovery mode after reboot, because my ATI PC graphics card doesn't have the Apple ROM, and my old legacy Nvidia GT120 is not supported by Mojave... I think. (Will give it a try?)
I will have to flash the ROM of the ATI card first, sorry.
Ah yes, that is awkward. Hmm what can we do without SIP changes hmmmmm
I'm motivated!! I managed to put my legacy GT120 graphics card back in and disabled SIP. Now it's rebooting (still on the grey Apple-icon screen after 10 min) with dtruss in place. I hear the HD working... (macOS and logs are on an SSD). Hope it will not fill the SSD!
My MacPro is still stuck endlessly on the grey Apple-icon screen.
I did another similar test on my MacBook Air (same software versions). I created a pool on a file, with the same ZFS datasets and user creation.
So I can confirm the same issue: after the first Mail.app configuration on first user login, downloading the emails from the server, everything seems to be OK until I close Mail.app. From that moment, launching Mail.app again will never succeed.
Next, I will disable SIP, retry, and then apply dtruss.
I got this in /var/log/org.openzfsonosx.zpool-import-all.err
(...) = 0 0
openat(0xFFFFFFFFFFFFFFFE, "/Volumes/zDisk/Home/maryse\0", 0x1000004, 0x0) = 5 0
open_nocancel("/Volumes/zDisk/Home/maryse\0", 0x1100004, 0x0) = 5 0
(...) = 0 0
open_nocancel("/Volumes/zDisk/Home/maryse/.VolumeIcon.icns\0", 0x0, 0x1B6) = 6 0
(...) = 0 0
5 is just EIO, but that might be translated to a different error elsewhere. If we are lucky maybe there are some messages in vnop_debug, which you can set to 1 with sysctl kstat.mumble.vnop_debug - it's in the wiki (sorry, in a zoom meeting). Then after it's done, you can use the "log show" command with "--start" to dump the logs it printed during boot.
Hi Jorgen,
sorry, no kstat.mumble.vnop_debug on my system....
I found kstat.zfs.darwin.tunable.vnop_debug
I'll do a run with this one.
Here is what I got in system.log (within the first 60 seconds after BOOT_TIME):
localhost bootlog[0]: BOOT_TIME 1586331067 0
(...)
kernel: (zfs) ZFS: Loading module ...
kernel: (zfs) ZFS: waiting for SPL init
kernel: (zfs) ZFS: hostid set to efc95fc9 from UUID '1F0AF306-3E91-507F-B9DC-84508991197B'
kernel: (zfs) ZFS: Loaded module v1.9.4-0, ZFS pool version 5000, ZFS filesystem version 5
kernel: (zfs) ZFS filesystem version: 5
Responsible: /usr/local/libexec/zfs/launchd.d/zpool-import-all.sh [149]
MetaData: {"file-flags":0,"responsible-process-path":"/usr/local/libexec/zfs/launchd.d/zpool-import-all.sh","hardware":"Mac","platform-binary":false,"responsible-process-pid":149,"operation":"file-mount","platform-policy":true,"signing-id":"zpool","team-id":"735AM5QEU3","vnode-type":"DIRECTORY","primary-filter":"path","path":"/Volumes/zDisk/Users/sterna-marine","profile-flags":0,"rdev":4294967168,"target":"/Volumes/zDisk/Users/sterna-marine","pid":396,"errno":1,"build":"Mac OS X 10.14.6 (18G4032)","summary":"deny(1) file-mount /Volumes/zDisk/Users/sterna-marine","process":"zpool","flags":69,"uid":0,"platform_binary":"no","primary-filter-value":"/Volumes/zDisk/Users/sterna-marine","process-path":"/usr/local/bin/zpool","profile":"platform","action":"deny","normalized_target":["Volumes","zDisk","Users","sterna-marine"]}
1 libzfs.2.dylib 0x0000000102c8c5c8 zfs_mount + 828
2 libzfs.2.dylib 0x0000000102c8dbda zpool_enable_datasets + 236
0x102c79000 - 0x102cb0ff7 libzfs.2.dylib (0)
/usr/local/lib/libzfs.2.dylib
logger: error: data-notify.sh: eid=15: failed to lock "/var/run/zed.zedlet.state.lock": /etc/zfs/zed.d/zed-functions.sh: line 128: /opt/homebrew/bin/flock: No such file or directory
logger: error: data-notify.sh: eid=15: failed to unlock "/var/run/zed.zedlet.state.lock": /etc/zfs/zed.d/zed-functions.sh: line 166: /opt/homebrew/bin/flock: No such file or directory
Ok sorry, had a zoom meeting, so didn't have time to finish.
Make the import script have
sysctl kstat.zfs.darwin.tunable.vnop_debug=1
zpool import ........
Then take note of the time you booted, let's say you started the machine at 09:06:23, after boot run
log show --start "2020-04-09 09:06:00" --predicate 'sender == "zfs" OR sender == "spl"' --style compact
Which dumps any logs printed by zfs/spl starting from 09:06:00 - change the date accordingly.
Although, isn't your log pretty much saying it's failed?
"target":"/Volumes/zDisk/Users/sterna-marine",
"pid":396,
"errno":1,
"deny(1) file-mount /Volumes/zDisk/Users/sterna-marine",
"process":"zpool",
"flags":69,
"uid":0,
"action":"deny","normalized_target":["Volumes","zDisk","Users","sterna-marine"]}
1 libzfs.2.dylib 0x0000000102c8c5c8 zfs_mount + 828
2 libzfs.2.dylib 0x0000000102c8dbda zpool_enable_datasets + 236
sorry, there is some noise about
file-mount /Volumes/zDisk/Users/sterna-marine
it is now corrected (a directory existed directly in place of the mount point, huhh!!).
After correcting and rebooting:
system.log says this about zfs
2020-04-08 10:06:24.796378+0200 0x49f5 Activity 0x7bc0 1131 0 zfs: (libsystem_info.dylib) Retrieve User by ID
2020-04-08 10:06:48.278042+0200 0x4a49 Activity 0x7c10 1134 0 zfs: (libsystem_info.dylib) Retrieve User by ID
2020-04-08 10:08:32.261992+0200 0x4c32 Activity 0x7f40 1152 0 zfs: (libsystem_info.dylib) Retrieve User by ID
2020-04-08 10:08:32.558756+0200 0x4bdf Default 0x0 0 0 kernel: (zfs) zfs_vnop_ioctl HFS_GET_BOOT_INFO
2020-04-08 10:09:36.194616+0200 0x188 Default 0x0 0 0 kernel: (zfs) ZFS: Loading module ...
2020-04-08 10:09:36.194644+0200 0x18f Default 0x0 0 0 kernel: (zfs) ZFS: waiting for SPL init
2020-04-08 10:09:36.194656+0200 0x188 Default 0x0 0 0 kernel: (zfs) ZFS: hostid set to efc95fc9 from UUID '1F0AF306-3E91-507F-B9DC-84508991197B'
2020-04-08 10:09:38.285720+0200 0x18f Default 0x0 0 0 kernel: (zfs) ZFS: Loaded module v1.9.4-0, ZFS pool version 5000, ZFS filesystem version 5
2020-04-08 10:09:38.285723+0200 0x18f Default 0x0 0 0 kernel: (zfs) ZFS filesystem version: 5
2020-04-08 10:10:44.208008+0200 0x1628 Default 0x0 492 0 logger: error: data-notify.sh: eid=15: failed to lock "/var/run/zed.zedlet.state.lock": /etc/zfs/zed.d/zed-functions.sh: line 128: /opt/homebrew/bin/flock: No such file or directory
2020-04-08 10:10:44.271948+0200 0x163f Default 0x0 505 0 logger: error: data-notify.sh: eid=15: failed to unlock "/var/run/zed.zedlet.state.lock": /etc/zfs/zed.d/zed-functions.sh: line 166: /opt/homebrew/bin/flock: No such file or directory
2020-04-08 10:12:32.549675+0200 0x219e Activity 0x45b0 808 0 zfs: (libsystem_info.dylib) Retrieve User by ID
... and this about zpool.
HERE are the actual issues, aren't they?
2020-04-08 10:10:19.206467+0200 0xa58 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4
2020-04-08 10:10:19.206941+0200 0xa56 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4s3
2020-04-08 10:10:19.207528+0200 0xa5e Default 0x0 0 0 kernel: (Sandbox) Sandbox: 1 duplicate report for zpool deny(1) file-read-data /dev/disk4s3
2020-04-08 10:10:19.207532+0200 0xa5e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk2s2
2020-04-08 10:10:19.207646+0200 0xa60 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4s4
2020-04-08 10:10:19.207763+0200 0xa66 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4s2
2020-04-08 10:10:19.208348+0200 0xa62 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4s1
2020-04-08 10:10:19.208466+0200 0xa61 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4s4
2020-04-08 10:10:19.208583+0200 0xa64 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4s1
2020-04-08 10:10:19.208935+0200 0xa53 Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-read-data /dev/disk4s2
2020-04-08 10:10:40.197069+0200 0xa4e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-mount /Volumes/zDisk/Users/maryse
2020-04-08 10:10:40.199713+0200 0xa4e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-mount /Volumes/zDisk/Users/rover-yachts
2020-04-08 10:10:40.235153+0200 0xa4e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-mount /Volumes/zDisk/Users/rover-yachts.1
2020-04-08 10:10:40.986675+0200 0xa4e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-mount /Volumes/zDisk/Users/rover-yachts.5
2020-04-08 10:10:40.988546+0200 0xa4e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-mount /Volumes/zDisk/Users/rover-yachts.6
2020-04-08 10:10:41.288680+0200 0xa4e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-mount /Volumes/zDisk/Users/sterna-marine
2020-04-08 10:10:41.289094+0200 0xa4e Error 0x0 0 0 kernel: (Sandbox) Sandbox: zpool(399) System Policy: deny(1) file-mount /Volumes/zDisk/Users/william.io
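In case it helps others reproduce this, the denials can be pulled out of the unified log after boot with a predicate on the message text, something like this (adjust the start time to your boot):
log show --start "2020-04-08 10:09:00" --predicate 'eventMessage CONTAINS "deny(1) file-mount"' --style compact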
Note: Disk 4 is the boot HD of the system:
/dev/disk4 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +320.1 GB   disk4
                                 Physical Store disk2s2
   1:                APFS Volume MacOS                   292.7 GB   disk4s1
   2:                APFS Volume Preboot                 22.5 MB    disk4s2
   3:                APFS Volume Recovery                507.5 MB   disk4s3
   4:                APFS Volume VM                      24.6 KB    disk4s4
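(To double-check which synthesized disk backs the boot volume, diskutil can report it; output will of course differ per machine:)
diskutil info / | grep -E 'Device Identifier|Part of Whole'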
Why is zpool trying to access system volumes?
Note: zpool is still granted Full Disk Access in System Prefs -> Security & Privacy (Gatekeeper).
Certainly sounds like it is denying zpool from mounting. It needs to read from the disk4 boot volume, to find the directory to mount on top of.
Aaaahhh, trying to access '/' in the first place, then '/Volumes'?
SIP is enabled for now,
zpool is still granted Full Disk Access in System Prefs -> Security & Privacy (Gatekeeper).
I don't get why it's refused at boot, and works after login. "id" presumably says your login name after all.
I wonder if the launchd definition in zpool-import-all.plist would benefit from
<key>KeepAlive</key>
<dict>
    <key>PathState</key>
    <dict>
        <key>/var/run/org.openzfsonosx.zpool-import-all.didRun</key>
        <true/>
    </dict>
</dict>
and why not:
<key>StartOnMount</key>
<true/>
The import script would then be fired on every disk mount, even USB (or network, I guess).
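If someone experiments with these keys, the daemon has to be reloaded after editing the plist; the usual launchctl dance, sketched here:
sudo launchctl unload /Library/LaunchDaemons/org.openzfsonosx.zpool-import-all.plist
sudo launchctl load -w /Library/LaunchDaemons/org.openzfsonosx.zpool-import-all.plist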
Some more testing
With SIP disabled, ZFS mounts of users' HomeDirectories are OK.
My MacPro has a password set on user 'root'. (I had forgotten this... and spotted it recently.)
So I'll have to get rid of that and redo the test with SIP disabled.
Curious about further developments here.
Also using MacOS 10.14.6 Mojave build 18G103.
I can confirm this SIP behavior with the HomeDirectory in its default location at /Users/.
Likewise, having jumped through all the hoops you did (granting Full Disk Access, etc.), the final nail in the coffin which made it work was disabling SIP.
This is frustrating and confusing because there are no flags on /Users/, let alone the restricted flag. I would guess that there's some behavior in SIP which explicitly reads the value of HomeDirectory, but that's just speculation.
This stackoverflow comment may be enlightening. It seems that FDA is a per-user setting, especially as concerns TCC (Transparency, Consent, and Control), which is likely the reason why mounting the HomeDirectory specifically is failing. A subject-adjacent topic worth looking into is how launchd and Launch Daemons deal with root and how they deal with TCC.
I believe this is symptomatic of a larger issue: MacOS doesn't like it when you mount anything on a user's Home Directory, because of TCC. This isn't just a ZFS problem, I think; this is a MacOS 10.14+ issue. However, MacOS and TCC don't care if the Home Directory structure is a symbolic link. It doesn't follow that link to enforce those rules.
I think a recommended solution here would be to let it mount on /Volumes/$your-pool/$your-user-dir or some other custom location, and symlink /Users/ (or whatever your Home Directory is) to that mount point; see the sketch below.
If you share this partition with Linux, like I do, then setting a mountpoint of /home/ etc. will not be contested by MacOS, and from there you can chflags -h hidden $mountdir on the mountpoint to hide it. There is precedent for this as-is, since /etc/ and /var/ are really at /private/etc and /private/var, but the /private directory has the hidden flag.
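Putting that workaround together, a sketch (pool, dataset, and user names here are hypothetical examples, not from this thread; assumes /Users/alice does not already exist):
# mount the home dataset somewhere TCC doesn't guard
sudo zfs set mountpoint=/Volumes/tank/home/alice tank/home/alice
# symlink the Home Directory to it; TCC doesn't follow the link
sudo ln -s /Volumes/tank/home/alice /Users/alice
# optionally hide the real mountpoint, like /private is hidden
sudo chflags -h hidden /Volumes/tank/home/alice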