Zygo
My proposal so far is to stop bees when `transid_min()` returns a value equal to or larger than what `transid_max()` returned when bees started up. So everything bees does...
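The stopping rule above can be sketched as follows. This is only an illustration of the proposed condition; `transid_min`/`transid_max` here are stand-in stubs with made-up values, not the real btrfs transaction queries bees uses.

```python
def transid_max():
    # highest committed transaction id visible at startup (stub value)
    return 1000

def transid_min():
    # lowest transid any crawler still has left to scan (stub value)
    return 998

# snapshot the upper bound once, when bees starts
startup_max = transid_max()

def should_stop():
    # stop once every crawler has caught up to (or passed) the
    # transid ceiling that existed when bees started
    return transid_min() >= startup_max

print(should_stop())  # False with the stub values above: 998 < 1000
```

With real transids, `transid_min()` advances as crawlers make progress, so the condition eventually becomes true and bees can exit having scanned everything that existed at startup.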
Yeah, some mechanism (that is less crude than the hammer that is `--workaround-btrfs-send`) for selecting which subvols to ignore might be nice. I think the goal for many people is...
There are some anomalies in the way hash table organization (total table size and number of bucket cells per hash) affects dedupe rate. Once you reach a certain size,...
> While I'm running with only 75% of my usual amount of RAM (due to a bit-flip error in one memory module), I discovered that RAM is a very precious...
bees saves a few bits per hash table entry by encoding some data in the position within the hash table (i.e. some bits of the hash form the page address...
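The bit-saving trick above can be sketched like this. The constants are assumptions for illustration only (bees's actual table layout and hash width may differ): if the bucket index is taken from the top bits of the hash, an entry's position in the table already encodes those bits, so only the remaining bits need to be stored.

```python
INDEX_BITS = 12                     # 2**12 = 4096 buckets (assumption)
HASH_BITS = 64                      # assumed hash width
STORED_BITS = HASH_BITS - INDEX_BITS

def bucket_of(h):
    # top INDEX_BITS of the hash select the bucket (the "page address")
    return h >> STORED_BITS

def stored_part(h):
    # only the low bits are kept inside the entry itself
    return h & ((1 << STORED_BITS) - 1)

def reconstruct(bucket, stored):
    # position in the table + stored bits give back the full hash
    return (bucket << STORED_BITS) | stored

h = 0xDEADBEEFCAFEF00D
assert reconstruct(bucket_of(h), stored_part(h)) == h
```

The saving is INDEX_BITS bits per entry, which adds up across millions of entries; the cost is that the bucket assignment is fixed by the hash value itself.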
> What happens when you have an existing `beeshash.dat` hash table and you delete it and recreate it?

The hash table becomes empty. You must also delete `beescrawl.dat` when you...
> explain very generally how to size a hash table - when I put my sysadmin hat on, I can't really determine what the optimal size actually is...

When you...
> [NTFS] needs a garbage collector (to remove unused chunks from the big hidden file)

bees uses btrfs's backref counting to achieve this. It is similar in some ways, just...
> the service wrapper script should kill the beescrawl.dat file when it encounters a hash table size change?

If the hash table is _deleted and recreated_, `beescrawl.dat` should be as well. If...
'Gracefully' depends on what you think is 'graceful'... When bees (master branch) receives `SIGTERM`, it tries to complete the current `ioctl` calls, save dedupe scan/crawl progress, save the in-memory hash...
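A minimal sketch of that shutdown pattern, with hypothetical placeholder functions (`save_crawl_state`, `save_hash_table` stand in for writing `beescrawl.dat` and `beeshash.dat`; `process` stands in for the real dedupe step): the signal handler only sets a flag, the main loop finishes its current unit of work, and state is persisted on the way out.

```python
import signal

stop_requested = False

def on_sigterm(signum, frame):
    # don't abort mid-operation; just ask the main loop to stop
    global stop_requested
    stop_requested = True

signal.signal(signal.SIGTERM, on_sigterm)

def save_crawl_state():
    print("crawl state saved")   # stand-in for writing beescrawl.dat

def save_hash_table():
    print("hash table saved")    # stand-in for writing beeshash.dat

def main_loop(work_items):
    for item in work_items:
        # complete the current operation even if a stop was requested
        process = lambda x: x    # placeholder for the real dedupe step
        process(item)
        if stop_requested:
            break
    # persist progress so the next run resumes from here
    save_crawl_state()
    save_hash_table()

main_loop(range(3))
```

The point of deferring to a flag is that an in-flight `ioctl` is never interrupted halfway; the loop exits at the next safe boundary.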