Kai Krakow


@Zygo So after a few days with this patch and a reboot, it looks like bees catches back up to the previous "point" value much faster - essentially within minutes...

> Are you running a version that includes [5c0480e](https://github.com/Zygo/bees/commit/5c0480ec594028a6ce47432bf2824a658ee3eabf) (and the previous commit)? Yes. After a reboot, "point" starts lower, at the value left at shutdown. But it then catches...
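
For illustration, a minimal sketch of the checkpoint-and-resume pattern this "point" behavior suggests (hypothetical, not bees's actual state handling; `crawl_state.json` is an invented path): the scan position is saved periodically, so a restart resumes somewhat behind where it stopped and re-covers the gap.

```python
import json
import os

STATE_FILE = "crawl_state.json"  # hypothetical location of the saved scan position

def save_point(point: int) -> None:
    # Write the checkpoint atomically so a crash never leaves a torn file.
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"point": point}, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, STATE_FILE)

def load_point() -> int:
    # On restart, resume from the last saved point (or from zero).
    try:
        with open(STATE_FILE) as f:
            return json.load(f)["point"]
    except FileNotFoundError:
        return 0
```

Re-scanning the small window between the last checkpoint and the actual stop position is cheap when the extents there were already deduplicated, which would explain catching up within minutes.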

> hmmmkay, somehow I've edited your comment instead of posting my own. Yay GitHub UI! 🚱 Unedited... GitHub keeps revisions of each change. :-)

> That isn't quite what happens. Well, I let `fdupes` recombine all the files in the snapshots (none of the files that bees was chunking through has been changed between...

Yeah, I think by deduplicating I only moved the queued work from the current transid range to a future one. But until then, some of the old snapshots may be...
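
To make the transid mechanics concrete, here is a hedged Python sketch (the `Extent` type and the scan loop are invented for illustration, not the bees API): a scanner that only visits extent references created inside its current generation window will skip anything a dedup pass rewrote, because deduplication creates new references with a newer transid.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    bytenr: int
    generation: int  # btrfs transid in which this extent reference was created

def scan_cycle(extents, gen_min, gen_max):
    # Visit only extent references created in [gen_min, gen_max).
    for ext in extents:
        if gen_min <= ext.generation < gen_max:
            yield ext

# A dedup pass at transid 900 rewrites references, so a crawler whose
# current window is [100, 500) skips them now and only revisits them in
# a future cycle, once its window advances past 900.
extents = [Extent(0x1000, 120), Extent(0x2000, 900)]
print([hex(e.bytenr) for e in scan_cycle(extents, 100, 500)])  # ['0x1000']
```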

FWIW, applying `duperemove` to the snapshots that bees took an eternity on sped it up considerably:

```
extsz datasz point gen_min gen_max this cycle start tm_left next cycle ETA
-----...
```
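
For reference, the kernel interface behind this - the FIDEDUPERANGE ioctl, which both `duperemove` and bees ultimately use - can be exercised directly. Below is a minimal Python sketch of a single dedupe request (struct layouts follow `struct file_dedupe_range` in the Linux uapi headers; error handling is deliberately minimal):

```python
import fcntl
import os
import struct

FIDEDUPERANGE = 0xC0189436  # _IOWR(0x94, 54, struct file_dedupe_range)

def dedupe_range(src_fd, src_off, length, dst_fd, dst_off):
    # struct file_dedupe_range header: src_offset, src_length,
    # dest_count, reserved1, reserved2 (24 bytes), followed by one
    # struct file_dedupe_range_info: dest_fd, dest_offset,
    # bytes_deduped (out), status (out), reserved (32 bytes).
    hdr = struct.pack("=QQHHI", src_off, length, 1, 0, 0)
    info = struct.pack("=qQQiI", dst_fd, dst_off, 0, 0, 0)
    buf = bytearray(hdr + info)
    fcntl.ioctl(src_fd, FIDEDUPERANGE, buf, True)
    _, _, bytes_deduped, status, _ = struct.unpack_from("=qQQiI", buf, 24)
    if status < 0:
        raise OSError(-status, os.strerror(-status))
    return bytes_deduped  # 0 bytes deduped means the ranges differed

# Usage: both ranges must hold identical data; the destination must be
# opened writable.
# src = os.open("a", os.O_RDONLY); dst = os.open("b", os.O_RDWR)
# print(dedupe_range(src, 0, 128 * 1024, dst, 0))
```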

Earlier, I removed your suggested patch from bees again and let it run without it. It now finished:

```
extsz datasz point gen_min gen_max this cycle start tm_left next cycle...
```

I don't see a loop here:

```
2025-10-12 09:52:27 1.34 ref_4298cdd000_189.641M_3: PERFORMANCE: 8.551 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x231000..0x8231000] dst[0x231000..0x8231000]
2025-10-12 09:52:36 1.34 ref_4298cdd000_189.641M_3: PERFORMANCE: 8.628...
```
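
As a quick sanity check for this kind of log (a hedged sketch that only assumes the line format visible in the excerpt above), one can count how often the exact same BeesRangePair recurs - the same pair repeating indefinitely would indicate a loop, while a pair seen a few times during a slow, constrained grow is just retries making forward progress:

```python
import re
from collections import Counter

# Matches the "src[...] dst[...]" portion of the PERFORMANCE lines above.
PAIR_RE = re.compile(r"src\[[^\]]+\] dst\[[^\]]+\]")

def find_suspect_loops(log_lines, threshold=10):
    # Flag range pairs that recur more often than `threshold` times.
    counts = Counter(m.group(0) for line in log_lines
                     for m in PAIR_RE.finditer(line))
    return {pair: n for pair, n in counts.items() if n > threshold}
```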

> Resulting in data loss or in filesystem corruption? I think this means "inconsistent data". Metadata itself should be fine - thus no "corruption" of the filesystem itself...

> Why does recovery only work with a maximum of 1 bad device? If raid0 on 4 devices can also detect 3 bad devices, why can't raid1 on 4 devices with 3...