Haravikk
> It does not look impossibly difficult for writing and reading, but becomes much more problematic on delete -- you'd need some reference counter, that should be possible to modify...
> No. recordsize is the fixed logical record size for all blocks of a file, sans the last one (which can be partly filled).

…meaning it's a **maximum** record size,...
> Your idea is to save on-disk space for storing loads of small files by squashing data from multiple, unrelated files together, in the unfounded hope that a bigger compression...
> It's just that recordsize and volblocksize define the logical block sizes ZFS compression works on

**Which is exactly what this proposal is about**; `volblocksize` sets an effective minimum amount...
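For anyone less familiar with the two properties being discussed, here's a rough illustration; the dataset and zvol names are made up for the example and aren't part of the proposal:

```
# recordsize is a per-dataset MAXIMUM logical block size for file data;
# a file smaller than this is stored in a single, smaller block.
zfs set recordsize=128K tank/files

# volblocksize is fixed when a zvol is created: every block of the
# volume is exactly this size, so compression always sees full blocks.
zfs create -V 10G -o volblocksize=16K tank/vol
```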
> please add some logic to atleast avoid hang of zpool/zfs command

It's currently being worked on; you can track the progress on issue #11082. I believe the actual code...
Thanks so much for all the work you've put into tracking this down; I've also linked the other thread you've found, which does sound like the same issue. Ranvel...
As per Arne's post to the forum topic, another possible workaround is to disable compressed ARC like so (set in `/etc/zfs/zsysctl.conf` to persist):

```
sysctl -w kstat.zfs.darwin.tunable.zfs.compressed_arc_enabled=0
```

This _may_...
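For persistence, the corresponding `/etc/zfs/zsysctl.conf` entry would presumably just be the plain `name=value` form (a sketch, assuming the file takes sysctl.conf-style lines):

```
# /etc/zfs/zsysctl.conf (assumed sysctl.conf-style entry)
kstat.zfs.darwin.tunable.zfs.compressed_arc_enabled=0
```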
Unfortunately, while disabling compressed ARC did improve performance overall, it didn't solve this problem; while the system was much more responsive with fewer entries in ARC and relatively low...
I'm another user seeing this problem with DoT (Cloudflare); hopefully switching to DoH solves it for me for now.
That still seems like a weird way to handle it though; while the scan stage is of course important, only the issuing speed has an impact on completion time as...