ZFS mounts: used size not always reported correctly
I've noticed that the reporting on ZFS pools is not accurate. The used value for each mount is wrong, as can be seen here:
|system/logs |zfs | |301Mi| 9% -----|3.1Gi| 3.4Gi|/var/log
|system/audit |zfs | |301Mi| 9% -----|3.1Gi| 3.4Gi|/var/log/audit
|system/home |zfs | |170Mi| 5% -----|3.1Gi| 3.3Gi|/home
zfs list shows the used values as:
NAME USED AVAIL REFER MOUNTPOINT
system/audit 25.2M 3.14G 25.2M /var/log/audit
system/home 170M 3.14G 170M /home
system/logs 301M 3.14G 301M /var/log
It seems to pick up the 301M for /var/log and apply it to /var/log/audit too.
Thanks for the report.
Any element of investigation would be valuable before I find the time to try to reproduce this.
@Canop Let me know if I can give you more information.
I don't know. I should 1) find time to dedicate to the question, 2) familiarize myself with ZFS pools, and 3) reproduce the problem.
Unless you want to dive into debugging lfs-core yourself and see where the data are misinterpreted.
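For anyone digging in, some context on where the numbers usually come from: mount listers typically derive used and size from the statvfs-style block counts the kernel reports per mount, not from ZFS itself. A minimal sketch of that arithmetic (this is not lfs-core's actual code; field names mirror statvfs, and the numbers are invented to roughly match the dysk output above):

```rust
// Sketch of the statvfs-style arithmetic most disk tools use.
// Field names mirror struct statvfs; the numbers are invented.
struct FsStats {
    frsize: u64, // fragment size, in bytes
    blocks: u64, // total blocks
    bfree: u64,  // free blocks
}

impl FsStats {
    fn size(&self) -> u64 {
        self.blocks * self.frsize
    }
    fn used(&self) -> u64 {
        (self.blocks - self.bfree) * self.frsize
    }
}

fn main() {
    // For a ZFS dataset, the kernel reports size = dataset used + pool
    // avail, so sibling datasets sharing a pool show the same free
    // space, but each should still show its own used space.
    let logs = FsStats { frsize: 4096, blocks: 902_144, bfree: 825_088 };
    println!("size={} used={}", logs.size(), logs.used()); // ~3.4Gi, ~301Mi
}
```

If both mounts end up with the same used value, either the same block counts were fetched for both mount points, or one entry's stats overwrote the other's somewhere downstream.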
Sorry, I'd love to help but I don't know Rust.
Could it be that, in the logic where the mount point is read, the value gets overwritten by a previously read value?
|system/logs |zfs | |301Mi| 9% -----|3.1Gi| 3.4Gi|/var/log
|system/audit |zfs | |301Mi| 9% -----|3.1Gi| 3.4Gi|/var/log/audit
The only thing the two mounts share, as far as I can see, is that one is mounted on top of a path inside the other.
Here is the output from mount:
system/audit on /var/log/audit type zfs (rw,xattr,noacl)
system/logs on /var/log type zfs (rw,nodev,noexec,xattr,noacl)
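To sanity-check the overwrite hypothesis: the two mounts arrive from the kernel as distinct (source, mount point) entries, so any collision would have to happen later, when stats are attached. A hypothetical parser for the mount lines above makes that concrete (this is an illustration, not dysk's actual parsing):

```rust
// Hypothetical parser for `mount` output lines, to show that the two
// ZFS datasets are distinct entries before any stats are attached.
fn parse_mount_line(line: &str) -> Option<(&str, &str, &str)> {
    let mut parts = line.split_whitespace();
    let source = parts.next()?;
    parts.next()?; // the word "on"
    let mountpoint = parts.next()?;
    parts.next()?; // the word "type"
    let fstype = parts.next()?;
    Some((source, mountpoint, fstype))
}

fn main() {
    let lines = [
        "system/audit on /var/log/audit type zfs (rw,xattr,noacl)",
        "system/logs on /var/log type zfs (rw,nodev,noexec,xattr,noacl)",
    ];
    for l in lines {
        let (src, mp, fs) = parse_mount_line(l).unwrap();
        println!("{src} -> {mp} ({fs})");
    }
}
```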
I can't dive into this this week. I'll use your observations when work becomes a little quieter.
I found another scenario where dysk is not reporting correctly on ZFS. My suspicion that the problem was caused by the mount points sharing a portion of the same path is therefore wrong.
The used size also differs from what zfs list reports.
Dysk 2.8.0 -u SI
│system/auditd │zfs │ │262K│ 0% │2.6G│2.6G│/var/log/audit │
│system/ossec-logs│zfs │ │262K│ 0% │2.6G│2.6G│/var/ossec/logs│
Dysk 2.8.0 -u binary
│system/auditd │zfs │ │256Ki│ 0% │2.4Gi│ 2.4Gi│/var/log/audit │
│system/ossec-logs│zfs │ │256Ki│ 0% │2.4Gi│ 2.4Gi│/var/ossec/logs│
zfs list
NAME USED AVAIL REFER MOUNTPOINT
system/auditd 248K 2.38G 248K /var/log/audit
system/ossec-logs 176K 2.38G 176K /var/ossec/logs
Could the libzetta crate be used to give better ZFS results? I suspect btrfs will also show problems similar to the current implementation's.
https://docs.rs/libzetta/latest/libzetta/
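As an alternative (or complement) to a native binding like libzetta, zfs list has a scriptable mode: -H drops the header and tab-separates fields, and -p prints exact byte counts, which makes the output trivial to parse. A sketch of such a parser; the sample bytes below are invented, not taken from a real system:

```rust
// Sketch: parse `zfs list -Hp -o name,used,avail,mountpoint` output.
// -H: no header, tab-separated fields; -p: exact byte counts.
// The sample output in main() is invented.
#[derive(Debug, PartialEq)]
struct Dataset {
    name: String,
    used: u64,
    avail: u64,
    mountpoint: String,
}

fn parse_zfs_list(output: &str) -> Vec<Dataset> {
    output
        .lines()
        .filter_map(|line| {
            let mut f = line.split('\t');
            Some(Dataset {
                name: f.next()?.to_string(),
                used: f.next()?.parse().ok()?,
                avail: f.next()?.parse().ok()?,
                mountpoint: f.next()?.to_string(),
            })
        })
        .collect()
}

fn main() {
    let sample = "system/auditd\t253952\t2555904000\t/var/log/audit\n\
                  system/ossec-logs\t180224\t2555904000\t/var/ossec/logs";
    for d in parse_zfs_list(sample) {
        println!("{} used={} avail={} at {}", d.name, d.used, d.avail, d.mountpoint);
    }
}
```

Shelling out to the zfs binary does add a runtime dependency and a subprocess per refresh, which may be why a tool like dysk would prefer kernel-provided stats; but for ZFS those per-dataset used values are the authoritative ones.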
Couldn't pools be printed like the following example:
┌───────────────────┬────┬────┬─────┬─────────┬─────┬──────┬───────────────┐
│ filesystem │type│disk│used │ use │free │ size │mount point │
├───────────────────┼────┼────┼─────┼─────────┼─────┼──────┼───────────────┤
│/dev/nvme0n1p1 │xfs │SSD │3.3Gi│41% ██ │4.7Gi│ 7.9Gi│/ │
│system │zfs │ │2.3Gi│54% ██▊ │1.4Gi│ 3.0Gi│*zpool* │
│- system/home │zfs │ │1.7Gi│54% ██▊ │1.4Gi│ 3.0Gi│/home │
│- system/usr-share │zfs │ │357Mi│20% █ │1.4Gi│ 3.0Gi│/usr/share │
│- system/logs │zfs │ │ 75Mi│ 5% ▎ │1.4Gi│ 3.0Gi│/var/log │
│- system/tmp │zfs │ │ 72Mi│ 5% ▎ │1.4Gi│ 3.0Gi│/tmp │
│- system/dnf-cache │zfs │ │ 51Mi│ 3% ▏ │1.4Gi│ 3.0Gi│/var/cache/dnf │
│- system/audit │zfs │ │ 12Mi│ 1% │1.4Gi│ 3.0Gi│/var/log/audit │
│- system/root │zfs │ │9.1Mi│ 1% │1.4Gi│ 3.0Gi│/root │
│- system/ossec-logs│zfs │ │384Ki│ 0% │1.4Gi│ 3.0Gi│/var/ossec/logs│
│/dev/nvme0n1p128 │vfat│SSD │1.4Mi│14% ▋ │8.6Mi│10.0Mi│/boot/efi │
└───────────────────┴────┴────┴─────┴─────────┴─────┴──────┴───────────────┘
where:
→ zpool list
NAME SIZE ALLOC FREE
system 3.75G 2.24G 1.51G
and
→ zfs list
NAME USED AVAIL REFER MOUNTPOINT
system 2.24G 1.39G 96K /system
system/audit 12.3M 1.39G 12.3M /var/log/audit
system/dnf-cache 51.4M 1.39G 51.4M /var/cache/dnf/
system/home 1.66G 1.39G 1.65G /home
system/logs 74.6M 1.39G 74.6M /var/log
system/ossec-logs 280K 1.39G 280K /var/ossec/logs
system/root 11.0M 1.39G 9.10M /root
system/tmp 71.8M 1.39G 71.8M /tmp
system/usr-share 357M 1.39G 357M /usr/share/
provide the correct values?
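The grouped view sketched above could be assembled by joining the pool-level numbers from zpool list with the per-dataset used from zfs list: the pool row takes size/alloc/free, each child row takes its own used and shares the pool's free space. A rough sketch of that join (render and the structs are hypothetical; values are in MiB, rounded from the listings above):

```rust
// Hypothetical assembly of a grouped pool view:
// pool row from `zpool list`, child rows from `zfs list`.
struct Pool {
    name: &'static str,
    size: u64,  // pool SIZE
    alloc: u64, // pool ALLOC (used)
    free: u64,  // pool FREE, shared by all children
}

struct Dataset {
    name: &'static str,
    used: u64, // dataset USED
}

fn render(pool: &Pool, datasets: &[Dataset]) -> Vec<String> {
    let mut rows = vec![format!(
        "{} used={} free={} size={} *zpool*",
        pool.name, pool.alloc, pool.free, pool.size
    )];
    for d in datasets {
        // One possible convention for a child's size: its own used
        // plus the shared pool free (matching statvfs semantics).
        rows.push(format!(
            "- {} used={} free={} size={}",
            d.name, d.used, pool.free, d.used + pool.free
        ));
    }
    rows
}

fn main() {
    // Values in MiB, rounded from the `zpool list` / `zfs list` output above.
    let pool = Pool { name: "system", size: 3840, alloc: 2294, free: 1546 };
    let datasets = [
        Dataset { name: "system/home", used: 1700 },
        Dataset { name: "system/logs", used: 75 },
    ];
    for row in render(&pool, &datasets) {
        println!("{row}");
    }
}
```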
An example of where dysk got it wrong:
┌────────────────────────────────┬────┬────┬─────┬─────────┬─────┬──────┬──────────────────────────┐
│ filesystem │type│disk│used │ use │free │ size │mount point │
├────────────────────────────────┼────┼────┼─────┼─────────┼─────┼──────┼──────────────────────────┤
│wazuh/var-lib-wazuh-indexer │zfs │ │189Gi│91% ████▌│ 18Gi│ 207Gi│/var/lib/wazuh-indexer │
│wazuh/var-ossec │zfs │ │ 99Gi│85% ████▎│ 18Gi│ 117Gi│/var/ossec │
│wazuh/usr-share-wazuh-index │zfs │ │745Mi│ 4% ▎ │ 18Gi│ 19Gi│/usr/share/wazuh-indexer │
│wazuh/usr-share-wazuh-dash │zfs │ │670Mi│ 3% ▏ │ 18Gi│ 19Gi│/usr/share/wazuh-dashboard│
The total size should be the same for all 4 mounts, since they draw from the same pool. The free space is reported correctly.