devZer0

Results: 144 comments of devZer0

interesting - crawling through 1mio cached empty dirs from ARC is more than an order of magnitude (about 25 times) slower than crawling through 1mio files (all dirs and files are contained...
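roughly how such a comparison could be reproduced - just a sketch: the `tank/dirs` / `tank/files` dataset names and the flat single-directory layout are assumptions, not the exact original setup:

```
# tank/dirs and tank/files are placeholder dataset names
zfs create tank/dirs && cd /tank/dirs
seq 1 1000000 | xargs mkdir        # 1mio empty directories
zfs create tank/files && cd /tank/files
seq 1 1000000 | xargs touch        # 1mio empty files

# first pass warms the ARC, second pass times the cached crawl
find /tank/dirs > /dev/null
time find /tank/dirs > /dev/null
find /tank/files > /dev/null
time find /tank/files > /dev/null
```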

i didn't know that the dbuf cache is separate from the ARC and uncompressed. for me, directory entries are metadata. so are filenames, file ACLs, timestamps etc.... the system is freshly...
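for reference, on ZFS on Linux the dbuf cache can be inspected separately from the ARC; the paths below come from the zfs kernel module and may differ between releases:

```
# dbuf cache statistics (kept separately from the ARC, uncompressed)
cat /proc/spl/kstat/zfs/dbufstats

# dbuf cache sizing tunables
cat /sys/module/zfs/parameters/dbuf_cache_max_bytes
cat /sys/module/zfs/parameters/dbuf_cache_shift
```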

i increased zfs_arc_meta_limit_percent and zfs_arc_dnode_limit_percent just to make sure there is enough RAM available for directory information
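for anyone wanting to do the same, a sketch of how these module parameters can be raised - the values are just examples, not recommendations, and some of these tunables were removed in later OpenZFS releases:

```
# runtime change (example values)
echo 90 > /sys/module/zfs/parameters/zfs_arc_meta_limit_percent
echo 40 > /sys/module/zfs/parameters/zfs_arc_dnode_limit_percent

# persistent across reboots
cat >> /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_meta_limit_percent=90
options zfs zfs_arc_dnode_limit_percent=40
EOF
```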

with zfs_arc_dnode_limit_percent removed from the boot options, it looks weird. how can the current dnode cache size surpass the hard limit?

```
ARC size (current):    11.6 %   2.2 GiB
Target...
```
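the raw numbers behind that arc_summary output can be cross-checked directly in arcstats (field names as on this ZFS version):

```
# actual dnode cache usage vs. the configured limit
grep -E '^(dnode_size|arc_dnode_limit)' /proc/spl/kstat/zfs/arcstats
```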

for my own interest, i set the VM's RAM down to 4 GB, warmed up the ARC again so that it contained 2.2 GB of dir information, did a hibernate of...
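a sketch of how that experiment can be scripted - `/tank` is a placeholder for the pool holding the dirs, and the arcstats field names may vary by release:

```
# warm the ARC with directory metadata
find /tank -type d > /dev/null

# snapshot metadata usage before and after hibernate, then compare
grep -E '^(size|arc_meta_used|dnode_size)' /proc/spl/kstat/zfs/arcstats > /tmp/arc.before
systemctl hibernate
# ... after resume:
grep -E '^(size|arc_meta_used|dnode_size)' /proc/spl/kstat/zfs/arcstats > /tmp/arc.after
diff /tmp/arc.before /tmp/arc.after
```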

> zpool is hung & iowait jumps, doesn't recover by itself.

please describe the pool layout the shrunken disk is part of, e.g. with the commands below. if there is no redundancy, there is nothing...
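```
# vdev layout, redundancy level and error counters
zpool status -v
# per-vdev capacity overview
zpool list -v
```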

it's even worse - i have found that metadata gets evicted even when there is no memory pressure or other data throughput at all, i.e. just a simple rsync -av --dry-run...
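this is easy to watch live; a sketch, with `/tank/manyfiles` as a placeholder for a dataset holding millions of files (arc_meta_used only exists on older releases):

```
# in one shell: watch ARC metadata usage while the walk runs
watch -n1 "grep -E '^(arc_meta_used|dnode_size)' /proc/spl/kstat/zfs/arcstats"

# in another shell: a pure metadata walk, no file data is read or written
mkdir -p /tmp/empty
rsync -av --dry-run /tank/manyfiles/ /tmp/empty/
```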

it's frustrating to see that this problem has existed for so long and has no priority for getting fixed. storing millions of files on zfs and using rsync or other tools...

> Last but not least: rsync is hungry for RAM. Like really really hungry when there are millions of files to sync.

yes. btw, did you notice this one? https://www.mail-archive.com/[email protected]/msg33362.html