Alexander Motin
> Thus space efficiency is not quite related to disks receiving 16 kB or more. It is related. The more data each disk gets, the less will matter the last...
But I would not care enough about some 7% of space usage to go patching the code or fine-tuning settings, especially since any compression will erase whatever remnants of sense are left in it.
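To make the arithmetic concrete, here is a minimal sketch of the usual RAIDZ allocation rules (modeled on the logic of `vdev_raidz_asize()`; the pool width, parity and ashift below are illustrative assumptions, not taken from this report):

```c
#include <stdio.h>
#include <stdint.h>

/*
 * Rough model of RAIDZ allocation: a block of psize bytes is split into
 * ashift-sized sectors, nparity parity sectors are added per row of
 * (width - nparity) data sectors, and the total is rounded up to a
 * multiple of (nparity + 1) sectors.  Treat the numbers as approximate.
 */
static uint64_t
raidz_asize(uint64_t psize, uint64_t ashift, uint64_t width, uint64_t nparity)
{
	uint64_t ndata = width - nparity;
	uint64_t asize = ((psize - 1) >> ashift) + 1;		/* data sectors */

	asize += nparity * ((asize + ndata - 1) / ndata);	/* parity sectors */
	asize = (asize + nparity) / (nparity + 1) * (nparity + 1); /* padding */
	return (asize << ashift);
}

int
main(void)
{
	uint64_t ashift = 12;			/* 4 kB sectors (assumed) */
	uint64_t width = 12, nparity = 2;	/* 12-wide RAIDZ2 (assumed) */

	for (uint64_t psize = 16 << 10; psize <= 1 << 20; psize <<= 1) {
		uint64_t asize = raidz_asize(psize, ashift, width, nparity);
		double ideal = (double)psize * width / (width - nparity);

		printf("%7llu kB block -> %7llu kB allocated, %5.1f%% above ideal\n",
		    (unsigned long long)(psize >> 10),
		    (unsigned long long)(asize >> 10),
		    (asize / ideal - 1) * 100);
	}
	return (0);
}
```

With this layout the round-up/padding cost falls from roughly a quarter of the block at 16 kB to under a percent at 1 MB, which is the point above: the more data each disk gets per block, the less the tail padding matters, and compression blurs the remainder further.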
At a quick look this seems like a pretty narrow solution. Please see this discussion: https://github.com/openzfs/zfs/pull/17094#issuecomment-3374742888.
According to the `indirect-X` vdevs reported, at some point you've used the vdev removal feature. But it seems the mapping table from those removals somehow got corrupted, and the code does not...
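For context, vdev removal leaves behind an `indirect-X` vdev whose mapping table redirects the old allocations to their new locations, and reads of the old offsets go through that table. A conceptual sketch of what such a lookup does (simplified; the real on-disk structures in `vdev_indirect_mapping` carry more detail, and the field names here are made up for illustration):

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Simplified picture of one indirect-vdev remap entry: a range of offsets
 * on the removed vdev is redirected to a location on a surviving vdev.
 */
typedef struct remap_entry {
	uint64_t re_src_offset;	/* offset on the removed (indirect) vdev */
	uint64_t re_size;	/* length of the remapped range */
	uint64_t re_dst_vdev;	/* vdev the data now lives on */
	uint64_t re_dst_offset;	/* offset on that vdev */
} remap_entry_t;

/* Binary-search the sorted, immutable table for the entry covering offset. */
static const remap_entry_t *
remap_lookup(const remap_entry_t *tab, size_t n, uint64_t offset)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (offset < tab[mid].re_src_offset)
			hi = mid;
		else if (offset >= tab[mid].re_src_offset + tab[mid].re_size)
			lo = mid + 1;
		else
			return (&tab[mid]);
	}
	return (NULL);	/* no valid mapping -- what a damaged table looks like */
}

int
main(void)
{
	static const remap_entry_t tab[] = {
		{ 0,       1 << 20, 1, 0x4000000 },
		{ 1 << 20, 1 << 20, 2, 0x8000000 },
	};
	const remap_entry_t *e = remap_lookup(tab, 2, 0x123456);

	if (e != NULL)
		printf("old offset 0x123456 -> vdev %llu offset 0x%llx\n",
		    (unsigned long long)e->re_dst_vdev,
		    (unsigned long long)(e->re_dst_offset +
		    (0x123456 - e->re_src_offset)));
	return (0);
}
```

If that translation can no longer be resolved, every block still referenced through the removed vdev becomes unreadable, which is what the error in question points at.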
> Would the inverse of [#15022](https://github.com/openzfs/zfs/pull/15022) be possible with this? @BoBeR182 No. RAIDZ shrinking (stripe width reduction) is impossible even theoretically, since some disks would get two blocks from one row, loss of which...
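A toy illustration of that point (purely hypothetical layout, not ZFS code): take one row written for a 4-wide RAIDZ1 and lay its sectors back down round-robin on 3 disks; one disk necessarily ends up with two sectors of the same row, so losing that single disk exceeds what one parity sector can rebuild.

```c
#include <stdio.h>

int
main(void)
{
	int old_width = 4, new_width = 3, parity = 1;
	int per_disk[8] = { 0 };

	/* One full row as written for the old width: P D D D. */
	for (int col = 0; col < old_width; col++)
		per_disk[col % new_width]++;

	for (int d = 0; d < new_width; d++) {
		printf("disk %d holds %d sector(s) of the row\n",
		    d, per_disk[d]);
		if (per_disk[d] > parity)
			printf("  -> losing disk %d loses %d sectors of the "
			    "row, but parity can rebuild only %d\n",
			    d, per_disk[d], parity);
	}
	return (0);
}
```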
> Currently, any queuing would be left to the device itself to do. That's fine if you know that you can submit all requests you may have through the block...
> At least on the Linux side `blk_queue_depth()` can be used to grab nr_requests. I am not sure whether it can be trusted in the case of SATA (or even SAS)...
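For reference, the same nr_requests value is also visible from user space through sysfs, so it is easy to check what the block layer will accept for a given device (a small sketch; whether a SATA/SAS device behind an HBA actually services that many outstanding commands is exactly the open question above):

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Print the block-layer request queue depth (nr_requests) for a device
 * name such as "sda", read from /sys/block/<dev>/queue/nr_requests --
 * presumably the same counter the in-kernel blk_queue_depth() mentioned
 * above exposes.
 */
int
main(int argc, char **argv)
{
	char path[256];
	FILE *f;
	unsigned long depth;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block-device-name>\n", argv[0]);
		return (1);
	}
	snprintf(path, sizeof (path), "/sys/block/%s/queue/nr_requests",
	    argv[1]);
	if ((f = fopen(path, "r")) == NULL) {
		perror(path);
		return (1);
	}
	if (fscanf(f, "%lu", &depth) == 1)
		printf("%s: nr_requests = %lu\n", argv[1], depth);
	fclose(f);
	return (0);
}
```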
@snajpa I think it applies to HBAs also. I don't think I ever saw Broadcom firmware pass through SCSI TASK SET FULL statuses from the disks to the OS, but...
At the very least `checkstyle` is unhappy: "./cmd/zpool/zpool_main.c: 10617: line > 80 characters". I am personally not sure it is not too trivial -- the man pages are there and this is...