Haravikk

84 comments by Haravikk

Isn't the amount to be issued always just the size of the pool at the time the scrub/resilver begins? I've never seen a scrub or resilver fail to give the...

> Speed of both scan and issue phase could wildly vary

But that's kind of my point; the scan speed isn't useful for calculating a completion time estimate, the only...
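
For the sake of illustration, here's a minimal sketch of the kind of estimate I mean, based purely on issue-phase progress; the function name and inputs are hypothetical, not anything `zpool` exposes directly:

```python
def scrub_eta_seconds(issued_bytes, total_to_issue_bytes, elapsed_seconds):
    """Remaining-time estimate based only on issue-phase progress."""
    if issued_bytes <= 0 or elapsed_seconds <= 0:
        return None  # nothing issued yet: no meaningful estimate
    issue_rate = issued_bytes / elapsed_seconds        # average bytes/sec so far
    remaining = max(total_to_issue_bytes - issued_bytes, 0)
    return remaining / issue_rate
```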

> Why not write a wrapper around zpool scrub that remembers data size and duration of the last scrub performed, kicks off the new one and then compares that with...
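
A rough sketch of what such a wrapper might look like, assuming Python and the standard `zpool list -Hp -o alloc`, `zpool scrub` and `zpool wait -t scrub` commands; the state-file location and layout here are purely hypothetical:

```python
#!/usr/bin/env python3
"""Hypothetical scrub wrapper: remember the last scrub's size and duration, then scrub again."""
import json
import subprocess
import sys
import time
from pathlib import Path

STATE = Path("/var/tmp/scrub_history.json")  # hypothetical state file


def allocated_bytes(pool):
    # `zpool list -Hp -o alloc` prints allocated space in bytes, no header
    out = subprocess.run(["zpool", "list", "-Hp", "-o", "alloc", pool],
                         check=True, capture_output=True, text=True)
    return int(out.stdout.strip())


def main(pool):
    history = json.loads(STATE.read_text()) if STATE.exists() else {}
    last = history.get(pool)
    if last:
        print(f"Last scrub: {last['alloc']} bytes in {last['seconds']:.0f}s")

    alloc = allocated_bytes(pool)
    start = time.monotonic()
    subprocess.run(["zpool", "scrub", pool], check=True)
    # Block until the scrub finishes, then record how long it took
    subprocess.run(["zpool", "wait", "-t", "scrub", pool], check=True)
    elapsed = time.monotonic() - start

    history[pool] = {"alloc": alloc, "seconds": elapsed}
    STATE.write_text(json.dumps(history))
    print(f"This scrub: {alloc} bytes in {elapsed:.0f}s")


if __name__ == "__main__":
    main(sys.argv[1])
```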

> Except it is not broken.

Maybe not strictly; it's working as implemented, but it's largely useless in its present form.

> Averaging could have little sense if workload vary...

Coincidentally, I've got a scheduled scrub ongoing on a pool at the moment, and it's a pool that doesn't have a lot of other activity going on, so it's an...

How have I been using ZFS for years and completely missed the `zpool wait` command? That's probably all I actually need; there's no particular benefit to adding redundancy to other...
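
For anyone else who missed it, a minimal example of blocking a script until a scrub or resilver completes with `zpool wait` (the pool name here is just a placeholder):

```python
import subprocess

# Block until any in-progress scrub or resilver on the pool has finished;
# `zpool wait -t` accepts a comma-separated list of activities to wait on.
subprocess.run(["zpool", "wait", "-t", "scrub,resilver", "tank"], check=True)
```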

> tbh i think this would just be a nice-to-have if we could have `resilver_defer` improvements like #14505, so it would just cleanly start them all rather than queuing them...

Is what you're thinking of for compression something like what I've described in #13107? ZVOLs can already see substantially better compression than the same files in a dataset, because a ZVOL...

Ah, I see, so this is more about the block level (the clue was in the name, wasn't it, d'oh!). That's interesting, as while it _technically_ still has the same fragmentation...

I read the article, but the problem is that ZFS' pointers will only currently have precision down to the physical block (`ashift`) level, so they have no ability to address a...
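
To put rough numbers on that granularity, a quick illustration, assuming only that `ashift` is the power-of-two exponent of the pool's smallest physical block:

```python
# The smallest unit the pool allocates (and, per the above, the finest
# precision the pointers currently have) is 2 ** ashift bytes.
for ashift in (9, 12, 13):
    print(f"ashift={ashift}: smallest addressable block = {1 << ashift} bytes")
# ashift=9  ->  512 bytes
# ashift=12 -> 4096 bytes
# ashift=13 -> 8192 bytes
```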