Sure, just add them to the list in `block_term_signal`... What is SIGUSR1 useful for?
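For context, a minimal sketch of what that list looks like. The function name `block_term_signal` is from bees, but the body here is an assumption for illustration, not the actual implementation:

```cpp
// Hypothetical sketch, not the actual bees code: termination signals are
// collected into one sigset_t and blocked, so adding a signal means adding
// one more sigaddset() call to the list.
#include <pthread.h>
#include <signal.h>

static void block_term_signal()
{
	sigset_t sigset;
	sigemptyset(&sigset);
	sigaddset(&sigset, SIGTERM);
	sigaddset(&sigset, SIGINT);
	// New signals would go here, e.g.:
	sigaddset(&sigset, SIGUSR1);
	sigaddset(&sigset, SIGUSR2);
	// Block them in the calling thread; threads created afterwards
	// inherit the mask, so a dedicated thread can sigwait() on them.
	pthread_sigmask(SIG_BLOCK, &sigset, nullptr);
}
```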
On a machine this size, I wouldn't ever put more than 256M in the bees hash table, and I'd probably even lower it to 128M for performance. Don't forget bees...
Try `btrfs fi defrag -czstd` if it wasn't compressed before:
```
# compsize beeshash.dat
Processed 1 file, 65537 regular extents (65537 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced...
```
There were some problems with sharing a hash table, which is why the idea got dropped. The fatal flaw is the way that new data evicts old data from the...
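To illustrate the eviction problem, here is a toy model (the structure and field names are assumptions, not bees's actual layout): the table is fixed-size, buckets are small arrays, and inserting a new entry into a full bucket silently drops the last one, so two filesystems sharing one table would constantly push each other's entries out.

```cpp
// Toy model of fixed-size bucket eviction, for illustration only.
#include <array>
#include <cstddef>
#include <cstdint>

struct Cell {
	uint64_t hash;      // block hash
	uint64_t phys_addr; // where the block lives on disk
};

template <std::size_t N>
struct Bucket {
	std::array<Cell, N> cells{};

	void insert(const Cell &c)
	{
		// Shift everything down one slot; cells[N-1] is evicted.
		for (std::size_t i = N - 1; i > 0; --i) {
			cells[i] = cells[i - 1];
		}
		cells[0] = c;
	}
};
```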
> I'd assume you can't dedupe across filesystems. So having a shared hash table doesn't seem ideal, as potentially you'd have items hash to the same value across file systems,...
It comes down to the number of decompressed blocks per extent. `compsize` provides a count of extents. Compressed filesystems typically have smaller extents so they tend to need larger hash...
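To put rough numbers on that tradeoff, a back-of-envelope sketch. The 16-byte entry size, the one-entry-per-extent simplification, and the 1M "uncompressed" extent size are assumptions for illustration; the 128K ceiling on a compressed extent's uncompressed data is a btrfs limit:

```cpp
// Rough sizing sketch (constants are illustrative assumptions): if each
// surviving hash table entry covers roughly one extent, the unique data
// a fixed-size table can track scales with average extent size.
#include <cstdint>
#include <iostream>

int main()
{
	const uint64_t table_bytes = 128ULL << 20; // 128M table, as above
	const uint64_t entry_bytes = 16;           // assumed entry size
	const uint64_t entries     = table_bytes / entry_bytes;

	// Compressed extents top out at 128K of uncompressed data;
	// uncompressed extents can be much larger.
	const uint64_t small_extent = 128ULL << 10;
	const uint64_t large_extent = 1ULL << 20; // 1M, illustrative

	std::cout << "compressed:   ~" << ((entries * small_extent) >> 30) << " GiB tracked\n";
	std::cout << "uncompressed: ~" << ((entries * large_extent) >> 30) << " GiB tracked\n";
}
```

Same table, roughly 8x less data covered when every extent is compressed, which is why compressed filesystems tend to want larger tables.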
Anything from 0.5 to 0.9 is fine, and within that range more is usually better. Getting all the way to 0.99 is not usually worth the extra RAM cost, but...
Synology runs on kernel 3.10? Unless they have been backporting a lot, 3.10 would be missing the tree search ioctl and the dedupe ioctl. Not sure if there's much point...
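For reference, the dedupe ioctl in question is `BTRFS_IOC_FILE_EXTENT_SAME` (Linux 3.12, later generalized in the VFS as `FIDEDUPERANGE`), so a 3.10 kernel predates it entirely. A minimal sketch of one dedupe request through the generic interface (not bees's own wrapper; filenames and offsets are placeholders, error handling trimmed):

```cpp
// One FIDEDUPERANGE call: ask the kernel to verify that the two ranges
// are identical and, if so, share the extents between them.
#include <fcntl.h>
#include <linux/fs.h>
#include <sys/ioctl.h>
#include <cstdio>
#include <cstdlib>

int main()
{
	int src = open("a.dat", O_RDONLY); // placeholder paths
	int dst = open("b.dat", O_RDWR);
	if (src < 0 || dst < 0) return 1;

	// One dest range hangs off the end of the struct.
	auto *args = static_cast<file_dedupe_range *>(
		calloc(1, sizeof(file_dedupe_range) + sizeof(file_dedupe_range_info)));
	args->src_offset = 0;
	args->src_length = 128 * 1024; // data must be identical in both files
	args->dest_count = 1;
	args->info[0].dest_fd = dst;
	args->info[0].dest_offset = 0;

	if (ioctl(src, FIDEDUPERANGE, args) == 0 &&
	    args->info[0].status == FILE_DEDUPE_RANGE_SAME) {
		printf("deduped %llu bytes\n",
		       (unsigned long long)args->info[0].bytes_deduped);
	}
	free(args);
	return 0;
}
```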
The current bees architecture doesn't have any idea how much work remains. bees asks btrfs if there's new data, and if btrfs says no, then bees is done; otherwise, there...
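Schematically, the control flow is closer to an open-ended poll than to a work queue with a known length, which is why there is no denominator for a percent-done figure. An assumed simplification (the helper functions are hypothetical stand-ins, not bees's real code):

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

// Hypothetical stand-ins for the real btrfs queries (assumptions):
static uint64_t filesystem_max_transid() { return 0; } // via tree search
static void scan_new_extents(uint64_t, uint64_t) {}    // hash + dedupe

static void crawl_loop()
{
	uint64_t last_scanned = 0;
	for (;;) {
		const uint64_t current = filesystem_max_transid();
		if (current == last_scanned) {
			// btrfs says there's no new data: bees is "done"
			// until the next write arrives.
			std::this_thread::sleep_for(std::chrono::seconds(10));
			continue;
		}
		scan_new_extents(last_scanned, current);
		last_scanned = current;
	}
}
```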
That's probably my bad. I changed the startup order so the command-line option parser could invoke methods on BeesContext, but that means BeesContext's constructor now runs before we've parsed the...
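A hedged sketch of the hazard (only the `BeesContext` name comes from bees; everything else is invented for illustration): constructing the context before argument parsing means the constructor can only ever see default values, so anything it caches from them is stale once the options are applied.

```cpp
// Illustrative only, not the bees startup code.
#include <memory>
#include <string>

struct BeesContext {
	std::string root_path = "/"; // default, not yet overridden
	BeesContext() {
		// Work done here runs BEFORE option parsing below, so it
		// must defer anything that depends on parsed options.
	}
	void set_root_path(const std::string &p) { root_path = p; }
};

int main(int argc, char **argv)
{
	// New order: the context exists first so the parser can call into it...
	auto ctx = std::make_shared<BeesContext>();

	// ...so options arrive as setter calls after construction.
	for (int i = 1; i < argc; ++i) {
		if (std::string(argv[i]) == "--root" && i + 1 < argc) {
			ctx->set_root_path(argv[++i]);
		}
	}
	return 0;
}
```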