MergerFS reported no space left on device, but 52% remains free
Describe the bug
Was copying content across the LAN using FDT. It had been copying for around 12 hours when FDT reported out of disk space.
Looking at the underlying drives, it appears mergerfs filled /alib/ext4a and then reported out of disk space.
MergerFS shows the pool as still having 52% free.
To Reproduce
Initiate a copy operation that fills a volume in the mergerfs pool.
Expected behavior
Copying should continue onto the next volume in the mergerfs pool.
System information:
- OS, kernel version: Linux alib 5.18.15-arch1-1 #1 SMP PREEMPT_DYNAMIC Fri, 29 Jul 2022 22:52:39 +0000 x86_64 GNU/Linux
- mergerfs version: 2.33.5
- mergerfs settings:
## Drives currently in alib
UUID=0774f5ac-f89c-456c-8bf8-c9f6ec20fc14 /alib/ext4a ext4 defaults,noatime 0 0
UUID=4405943b-f674-4f64-ae33-aa91245b9a44 /alib/ext4b ext4 defaults,noatime 0 0
UUID=39da7344-05b8-44d5-9b66-62e69f7ae916 /alib/ext4c ext4 defaults,noatime 0 0
## QNAP JBOD drives
UUID=3dc98605-3c94-46b5-afd0-d2f613594a95 /jbod/d1 ext4 defaults,noatime 0 0
UUID=8134f536-3fe4-48ea-9f74-a9e142192242 /jbod/d2 ext4 defaults,noatime 0 0
UUID=43b4bd7f-851e-41bf-80a8-a7a1ff680324 /jbod/d3 ext4 defaults,noatime 0 0
UUID=dcbf83d5-2d5e-4a9a-a819-6891d7e853a9 /jbod/d4 ext4 defaults,noatime 0 0
UUID=18c31471-265b-4b96-88c8-a51b143a18dd /jbod/d5 ext4 defaults,noatime 0 0
UUID=8a3913c4-f28a-4575-ae5e-7c273651607f /jbod/d6 ext4 defaults,noatime 0 0
UUID=0c254e4b-5254-46e3-aa3e-f8fec9b78104 /jbod/d7 ext4 defaults,noatime 0 0
UUID=56e37144-7411-492f-8a6b-18bd2831bd0c /jbod/d8 ext4 defaults,noatime 0 0
## mount mergerfs /alib/roonpool
/alib/ext4a:/alib/ext4b:/alib/ext4c:/jbod/d1:/jbod/d2:/jbod/d3:/jbod/d4:/jbod/d6:/jbod/d7:/jbod/d8 /jbod/pool fuse.mergerfs defaults,allow_other,use_ino,func.getattr=newest,fsname=mergerFS 0 0
- List of drives, filesystems, & sizes:
df -h
Filesystem Size Used Avail Use% Mounted on
dev 7.7G 0 7.7G 0% /dev
run 7.7G 9.2M 7.7G 1% /run
/dev/nvme0n1p2 234G 92G 131G 42% /
tmpfs 7.7G 0 7.7G 0% /dev/shm
tmpfs 7.7G 0 7.7G 0% /tmp
tmpfs 7.7G 24K 7.7G 1% /var/log
tmpfs 7.7G 0 7.7G 0% /var/tmp
mergerFS 36T 17T 19T 48% /jbod/pool
/dev/nvme0n1p1 511M 60M 452M 12% /boot
/dev/sdd1 2.7T 28K 2.7T 1% /jbod/d1
/dev/sde1 2.7T 28K 2.7T 1% /jbod/d2
/dev/sdf1 2.7T 28K 2.7T 1% /jbod/d3
/dev/sdj1 2.7T 8.0K 2.7T 1% /jbod/d7
/dev/sdi1 2.7T 1.3T 1.3T 51% /jbod/d6
/dev/sdc1 5.5T 5.5T 4.0G 100% /alib/ext4c
/dev/sdk1 2.7T 28K 2.7T 1% /jbod/d8
/dev/sdg1 2.7T 8.0K 2.6T 1% /jbod/d4
/dev/sda1 5.5T 5.5T 13G 100% /alib/ext4a
/dev/sdb1 5.5T 4.3T 1.2T 79% /alib/ext4b
/dev/sdh1 9.1T 1.6T 7.0T 19% /jbod/d5
tmpfs 1.6G 0 1.6G 0% /run/user/1000
lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sda
└─sda1 ext4 1.0 0774f5ac-f89c-456c-8bf8-c9f6ec20fc14 12.4G 100% /srv/nfs/ext4a
/alib/ext4a
sdb
└─sdb1 ext4 1.0 4405943b-f674-4f64-ae33-aa91245b9a44 1.1T 79% /srv/nfs/ext4b
/alib/ext4b
sdc
└─sdc1 ext4 1.0 39da7344-05b8-44d5-9b66-62e69f7ae916 4G 100% /srv/nfs/ext4c
/alib/ext4c
sdd btrfs bfd6fa6d-b19c-4a14-bb60-95e837a8caa4
└─sdd1 ext4 1.0 3dc98605-3c94-46b5-afd0-d2f613594a95 2.7T 0% /srv/nfs/jbod1
/jbod/d1
sde btrfs 5de1a754-2362-49ab-a056-dd946db4df9f
└─sde1 ext4 1.0 8134f536-3fe4-48ea-9f74-a9e142192242 2.7T 0% /srv/nfs/jbod2
/jbod/d2
sdf btrfs 09d5fd6e-a67c-43ff-819f-8b278cae3afc
└─sdf1 ext4 1.0 43b4bd7f-851e-41bf-80a8-a7a1ff680324 2.7T 0% /srv/nfs/jbod3
/jbod/d3
sdg
└─sdg1 ext4 1.0 sdc dcbf83d5-2d5e-4a9a-a819-6891d7e853a9 2.5T 0% /srv/nfs/jbod4
/jbod/d4
sdh
└─sdh1 ext4 1.0 18c31471-265b-4b96-88c8-a51b143a18dd 7T 18% /srv/nfs/jbod5
/jbod/d5
sdi
└─sdi1 ext4 1.0 8a3913c4-f28a-4575-ae5e-7c273651607f 1.3T 48% /srv/nfs/jbod6
/jbod/d6
sdj
└─sdj1 ext4 1.0 0c254e4b-5254-46e3-aa3e-f8fec9b78104 2.7T 0% /srv/nfs/jbod7
/jbod/d7
sdk btrfs df099b71-c0a9-4d4c-be0b-a0708c1c5317
└─sdk1 ext4 1.0 56e37144-7411-492f-8a6b-18bd2831bd0c 2.7T 0% /srv/nfs/jbod8
/jbod/d8
nvme0n1
├─nvme0n1p1 vfat FAT32 DD4D-2D9A 451.4M 12% /boot
└─nvme0n1p2 ext4 1.0 52422a55-aa99-4a80-b631-a4c009fc2b8a 130.2G 39% /
- A strace of the application having a problem:
strace -fvTtt -s 256 -o /tmp/app.strace.txt <cmd>
strace -fvTtt -s 256 -o /tmp/app.strace.txt -p <appPID>
- strace of mergerfs while the app tried to do its thing:
strace -fvTtt -s 256 -p <mergerfsPID> -o /tmp/mergerfs.strace.txt
Will need to attempt to reproduce and run straces.
Additional context
defaults,allow_other,use_ino,func.getattr=newest,fsname=mergerFS
You are using the default create policy epmfs and the default minfreespace of 4G. mergerfs doesn't aggregate storage in a linear fashion; it aggregates access to existing filesystems. The policy you have very explicitly limits creation to branches which already have the relative path. It is entirely valid and expected for it to report out of space when you've expressly told it to filter out all branches which don't have the relative path or have less than the default 4G of free space. If that's not the behavior you want, you'll need to change the settings (see the sketch after the links below).
- https://github.com/trapexit/mergerfs#functions-categories-and-policies
- https://github.com/trapexit/mergerfs#what-policies-should-i-use
- https://github.com/trapexit/mergerfs#why-are-all-my-files-ending-up-on-1-drive
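For example, here is a minimal sketch of what the fstab options might look like if you wanted creates to go to the branch with the most free space instead (category.create=mfs); the 10G minfreespace value is purely illustrative, not a recommendation:

/alib/ext4a:/alib/ext4b:/alib/ext4c:/jbod/d1:/jbod/d2:/jbod/d3:/jbod/d4:/jbod/d6:/jbod/d7:/jbod/d8 /jbod/pool fuse.mergerfs defaults,allow_other,use_ino,func.getattr=newest,category.create=mfs,minfreespace=10G,fsname=mergerFS 0 0

The same kind of change can also be made at runtime through the pool's .mergerfs control file, e.g. setfattr -n user.mergerfs.category.create -v mfs /jbod/pool/.mergerfs, which avoids a remount.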
Thanks for that. It's been years since I reviewed the documentation, and this is the first time I've tried to treat all the drives as a single storage pool - I should have checked the docs before proceeding. Will do, and will revert if I have questions.
So given I've already copied a large quantity of data across, it would seem the simplest thing to do now would be to create the relative path on all of the drives that are mounted as part of the mergerfs pool (e.g. as sketched below)?
I'd like to keep directories together as far as possible, so by doing the above am I correct in surmising that mergerfs will fill the first drive with the relative path, then move on to the next drive with the same relative path and add content to it, and so on?
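As a rough sketch of what I mean, assuming a hypothetical top-level directory named Music (substitute the actual relative path):

for b in /alib/ext4a /alib/ext4b /alib/ext4c /jbod/d1 /jbod/d2 /jbod/d3 /jbod/d4 /jbod/d6 /jbod/d7 /jbod/d8; do
    # create the directory directly on each underlying branch, not through the pool mount;
    # "Music" is a placeholder for the real relative path
    mkdir -p "$b/Music"
done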
The policies work exactly as described. If you put the relative path on all the drives, it will just act like mfs. If you want to kinda keep stuff together without manual intervention, then use the policies that do that, like mspmfs.
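For illustration only, a sketch of how that would look in the existing mount options, changing nothing but the create policy:

defaults,allow_other,use_ino,func.getattr=newest,category.create=mspmfs,fsname=mergerFS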