nullfsvfs
memory use?
This is more of a question than a problem report.
I stumbled across this project when I remembered my old Amiga's nul: (null:?) device, which also allowed you to discard all file output from applications that would, say, create multiple files under a directory hierarchy based on a single input name pattern.
Except that to the best of my knowledge, those files all behaved like /dev/null in that they were always 0 bytes (IIRC you could list any random file "on" that nul: device and it would show as existing but empty).
I understand that this nullfs project maintains some amount of state - how much memory does/can that consume, and under what conditions is that memory released back to the system? Is there a mount option that disables the advanced features and makes the filesystem behave like /dev/null?
The kernel module will not store anything in memory unless you decide to keep file data for certain files via the module option; then the file data, i.e. memory equal to the file's size, is kept in buffer/cache. See:
https://github.com/abbbi/nullfsvfs?tab=readme-ov-file#keeping-file-data
The memory is freed when files marked with the write= option are deleted.
Otherwise the module should not use any great amount of memory at all, other than internal kernel structures (inode information); I have not done any testing on how much that amounts to depending on the number of files. The main purpose of this module is to simulate a file system that behaves like /dev/null.
By default all file data is sent to the void; only the directory/file structure and file size are preserved, and this information is ultimately lost if the filesystem is unmounted.
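To illustrate the default behaviour, roughly something like this (untested sketch; the filesystem type name and mount point are just how I would write it, check the README for the exact commands):

# mount the module somewhere (assuming the type is registered as "nullfs")
mount -t nullfs none /nullfs
# the written data goes to the void ...
dd if=/dev/zero of=/nullfs/test.img bs=1M count=100
# ... but the file and its size are still visible, so size checks keep working
ls -lh /nullfs/test.img
# after unmounting, the directory/file structure and sizes are gone as well
umount /nullfs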
"the kernel module will not store anything in memory" ... vs. "only the directory/file structure and file size are preserved, and this information is ultimately lost if the filesystem is unmounted."
I'd guess that data is stored somewhere ;) (and maybe it's also released when you "delete" the files in question?)
Personally I'd still add a mount option to not store anything, partly for completeness but also so there can be no doubt about this on (resource-limited) systems with long uptimes.
EDIT: "File size is preserved to work around applications that do size checks:" in the Readme suggests a bit that such applications aren't that frequent.
Sorry, but I don't know what this is about.
By default the module will NOT STORE ANY DATA ANYWHERE. Data is sent to the void, just like with /dev/null.
The only DIFFERENCE is that the file system structure is kept. That is the main PURPOSE of this module. And to keep this information, you need to keep the inode structure somewhere (kernel filesystem buffer/cache).
If you don't want filesystem structures, just pipe to /dev/null.
I fail to see what this discussion is about; I guess you can push file system structures with millions of files into a directory mounted with this module and will not see any great memory usage.
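If you do not need any file or directory names at all, a plain redirect (illustrative, using the someApp placeholder from below) already keeps nothing:

# nothing is stored anywhere, not even a file name or size
someApp > /dev/null 2>&1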
On Wednesday May 21 2025 07:29:00 Michael Ablassmeier wrote:
have you actually used the module?
Not yet, because my use case would be one that does not keep any information (aka data, regardless of what kind) at all, and I'm first trying to get a handle on what is kept. Or how much effort it might require to add the option I mentioned myself.
By default only inode and dentry information is kept, meaning:
- directory and file names
- directory and file structure
- regular file and directory information (size, name, change/create date, owner, group), etc.,
just like you would expect from a regular filesystem.
Check your use case and use slabtop to see how much memory your kernel uses for the dentry and lsm_inode_cache slabs.
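For example, a one-shot check (sketch) of the relevant slab caches:

# print slab usage once and filter for the dentry/inode caches
slabtop -o | grep -E 'dentry|inode_cache'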
nullfs mounted, created ~ 8 million files with 1 byte "size".
find /nullfs/ | wc -l
8776856
slabtop for the inode and dentry caches shows:
9086592 9086592 100% 0.03K 70989 128 283956K lsm_inode_cache
9009924 9009924 100% 0.19K 214522 42 1716176K dentry
8817022 8817022 100% 0.62K 364204 25 5827264K inode_cache
after umount:
853632 727497 85% 0.03K 6669 128 26676K lsm_inode_cache
665658 649764 97% 0.19K 15849 42 126792K dentry
455750 455520 99% 0.62K 18230 25 291680K inode_cache
after:
echo 2 > /proc/sys/vm/drop_caches
425600 59621 14% 0.03K 3325 128 13300K lsm_inode_cache
Not yet, because my use case would be one that does not keep any information (aka data, regardless of what kind) at all,
If you do want to store even less, you will end up with a filesystem that does not behave like you would expect a filesystem to behave. I don't see how this makes sense.
On Wednesday May 21 2025 08:06:37 Michael Ablassmeier wrote:
If you do want to store even less, you will end up with a filesystem that does not behave like you would expect a filesystem to behave. I don't see how this makes sense.
As an equivalent of
> ln -s /dev/null /path1/to1/foo1.txt
> ln -s /dev/null /path2/to2/foo1.log
> someApp --output=/path1/to1/foo1.txt --log=/path2/to2/foo1.log
I suppose this would boil down to maintaining the same information in memory as is done currently, but discarding it as soon as the file is closed.
As far as I'm concerned a null filesystem that acts as a black hole could also always appear empty when you do an ls, du etc. In fact, that's how a black hole would behave ;)
As far as I'm concerned a null filesystem that acts as a black hole could also always appear empty when you do an ls, du etc. In fact, that's how a black hole would behave ;)
yes, but this will break certain user space programs that do size checks after they have written data.
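e.g. something like this (hypothetical, but a common pattern) would fail if the filesystem always reported the file as empty:

# write output, then verify the file actually has the expected size
dd if=/dev/zero of=/nullfs/out.bin bs=1M count=10
test "$(stat -c %s /nullfs/out.bin)" -eq $((10*1024*1024)) || echo "size check failed"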
On Wednesday May 21 2025 10:50:03 Michael Ablassmeier wrote:
yes, but this will break certain user space programs that do size checks after they have written data.
Which is why I mentioned a mount option... (as well as possibly implementing it myself if not a complete rabbit hole)