RageLtMan

343 comments by RageLtMan

Just noticed #3108 appears to have the same problem. Did we introduce a regression that is causing corruption in active datasets? Outputs for zdb commands on the dataset: ``` zdb...

This is going from bad to worse. Attempting to copy data out of the dataset into another pool resulted in I/O errors on read. This hard-locked the host. I...

I ran a scrub, which changed the output of `zpool status -v` to something even less pleasant: ``` zpool status -v rpool pool: rpool state: ONLINE status: One or more...
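The scrub-and-inspect step above can be sketched as a short shell sequence. The pool name `rpool` comes from the comment; the `zpool wait` call is an assumption (it requires a newer OpenZFS release; on the ZoL versions of this era you would poll `zpool status` instead):

```shell
# Kick off a scrub, wait for it, then review the verbose status.
# Pool name taken from the comment; adjust for your system.
POOL=rpool

zpool scrub "$POOL"

# 'zpool wait -t scrub' blocks until the scrub finishes (OpenZFS 0.8+);
# on older releases, poll 'zpool status' until the scan line reports
# completion instead.
zpool wait -t scrub "$POOL" 2>/dev/null || sleep 60

# -v lists any files with unrecoverable errors found during the scan.
zpool status -v "$POOL"
```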

Finally some good news, or at least a lack of it getting worse. After the last scrub I rebooted the host in the hopes that an import will actually free the "freeing"...

I think the grievous error was introduced when I stupidly tried a rollback on a broken dataset. I wonder if even the gurus know how to fix this. As far as the...
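For context on why a rollback can make a damaged dataset worse: `zfs rollback` discards every change made since the target snapshot, and with `-r` it also destroys any newer snapshots, so metadata that was still partially recoverable may be gone afterwards. A minimal sketch, with hypothetical dataset and snapshot names:

```shell
# Hypothetical names for illustration only.
DS=rpool/data

# List snapshots first, so you know exactly what a rollback will discard.
zfs list -t snapshot -r "$DS"

# A plain rollback only works against the most recent snapshot;
# -r additionally destroys any snapshots newer than the target.
# On a dataset with suspected metadata damage, capture a backup
# with 'zfs send' before attempting this.
zfs rollback "$DS@known-good"
```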

@kernelOfTruth I just noticed your spacemap was also zero, and you're still running the two PRs I removed. #2909 looks more likely to be the culprit, and I've lodged a question asking...

@edillmann: One of my pools is undamaged, so removing the patch from the stack appears "safe enough," provided you don't have additional errors coming up like the one which...

As the pool runs, do those numbers get worse? As far as what to do, I'm not the authority on this, but so far I've been tackling this with a slim...

This looks promising. Running _zdb -mc_ on the production host, which was given the new slimmed-down patch stack, then scrubbed, then verified using zdb, returned the following: ``` Traversing all...
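The verification step quoted above can be sketched as follows; `rpool` is assumed from the earlier comments, and note that this traversal is read-only but can take a long time on large pools:

```shell
# -m dumps metaslab and spacemap statistics; -c verifies checksums
# of metadata blocks while traversing the pool (use -cc to also
# verify data blocks).
zdb -mc rpool

# A clean run typically ends with a leak summary of zero and the
# line "No leaks (block sum matches space maps exactly)".
```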

@kernelOfTruth I don't think that #2909 is the only cause of this; I'm pretty sure other things can cause it, such as Linux trying to add ACLs after the...