
Results 411 comments of Zygo

I tried to reproduce this on 5.4.42, and got the following:

```
00.01 Scanned 2214 retained 0
Deduplicating volume /tester/
Deduplicating volume /tester/current
00.67 Size group 1/18 (68821971) sampled 2...
```

In 5.14 there was a change that caused duplicate dirents to be created frequently during log tree replay; this was fixed in 5.16-rc1 (9a35fc9542fa btrfs: change error handling for btrfs_delete_*_in_log). The repro recipe...

> I removed 2 drives by using parted / wipefs to remove the BTRFS partitions followed by a 5 sec shred command on 2 of the 3 drives just to...

> This is generally not how the average raid 10 works.

All raid10 implementations tolerate 1 to N/2 failures at random, depending on which devices fail, but the maximum _guaranteed_...
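The "1 to N/2 at random, but fewer guaranteed" distinction can be checked by brute force under a conventional mirror-pair model. This is a sketch under that assumption (fixed pairs of devices 2k and 2k+1; btrfs raid10 allocates mirrored stripes per chunk rather than per device, but the tolerance bounds it illustrates are the same):

```python
from itertools import combinations

def survives(failed, num_devices):
    """A conventional raid10 of num_devices is num_devices//2 mirror
    pairs (devices 2k and 2k+1). Data survives as long as no pair has
    lost both members. Illustrative model, not btrfs's chunk layout."""
    pairs = [(2 * k, 2 * k + 1) for k in range(num_devices // 2)]
    return all(not (a in failed and b in failed) for a, b in pairs)

def tolerance(num_devices):
    """Return (guaranteed, best_case): the largest k such that *every*
    k-device failure is survivable, and the largest k such that *some*
    k-device failure is survivable."""
    guaranteed = best = 0
    for k in range(1, num_devices + 1):
        combos = [set(c) for c in combinations(range(num_devices), k)]
        if all(survives(c, num_devices) for c in combos):
            guaranteed = k
        if any(survives(c, num_devices) for c in combos):
            best = k
    return guaranteed, best

print(tolerance(6))  # guaranteed 1 failure, best case N/2 = 3
```

For 6 devices this reports a guarantee of only 1 failure, even though a lucky set of 3 failures (one per mirror pair) is survivable.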

> do you know of any scripts that are in development, for example that use leaf data to rebuild the meta data

Development at https://lore.kernel.org/linux-btrfs/[email protected]/ has been going on since...

Something is wrong here... a 1G filesystem should have a much smaller global reserve. I have filesystems from 14G to 96G with global reserves of 16M to 227M. 128G and larger...

That code sets the maximum size. The minimum size is based on the size of the filesystem (or should be, but seems to be failing here).
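To make the sizing argument above concrete, here is a hypothetical model of the clamp behavior being described: the reserve scales with filesystem size up to a fixed ceiling set in the code. The `fraction` and `ceiling` values are illustrative placeholders chosen to roughly match the 14G-to-96G observations quoted earlier, not the constants btrfs actually uses:

```python
def expected_reserve(fs_size_bytes, fraction=0.002, ceiling=512 * 2**20):
    """Hypothetical model of global reserve sizing: proportional to
    filesystem size, capped at a maximum fixed in the code. The 0.002
    fraction and 512M ceiling are illustrative guesses, not btrfs's
    actual constants."""
    return min(int(fs_size_bytes * fraction), ceiling)

# Under this model a 1G filesystem gets about 2M of reserve, far below
# the 16M-227M observed on 14G-96G filesystems.
print(expected_reserve(2**30))
```

Any 1G filesystem whose reserve lands anywhere near the ceiling, rather than near the size-proportional floor, suggests the minimum-size calculation is failing.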

Fatal errors are conditions marked in the kernel source with the `btrfs_panic` function family. This is the current set of conditions:

* extent tree modified while locked
* backref cache...

Note that if the device places both copies of _any metadata page_ in the same failure domain, the filesystem can be destroyed (at least until we have btrfs repair tools...

Encryption solves only problem 1. The metadata write pattern is easy to recognize even without knowing the block contents. To use encryption to solve problem 2, you'd have to encrypt...