Kai Krakow
I think the latter variation is what most people who use this feature want. Though the former variation would achieve better throughput over time and keep the backlog smaller...
Ah, the word "temporary" is a good point. :-)
While I'm running with only 75% of my usual amount of RAM (due to a bit-flip error in one memory module), I've discovered that RAM is a very precious resource...
> Oddly, I find that _reducing_ the amount of available RAM helps with latency, e.g. put all the rsync processes into a cgroup, then set that cgroup's `memory.limit_in_bytes` to 1GB...
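For reference, a minimal sketch of that setup, assuming the cgroup-v1 memory controller is mounted at the usual path; the `rsynclimit` group name and the rsync paths are made up:

```
# Create a memory-limited group (the name "rsynclimit" is hypothetical):
mkdir /sys/fs/cgroup/memory/rsynclimit
# Cap it at 1GB, as described above:
echo $((1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/rsynclimit/memory.limit_in_bytes
# Move the current shell into the group, then start rsync from it:
echo $$ > /sys/fs/cgroup/memory/rsynclimit/cgroup.procs
rsync -a /source/ /backup/
```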
I wonder if this results from a problem documented in the kernel as "allocstall"... I changed the kswapd watermarks and the situation seems to improve:

```
vm.watermark_scale_factor=200
```

Its result is that...
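To try the same tuning: the sysctl knob and the `allocstall` counters in `/proc/vmstat` are standard kernel interfaces, but the value 200 is just what happened to work here, not a general recommendation:

```
# Apply at runtime:
sysctl vm.watermark_scale_factor=200
# Persist across reboots (the file name is arbitrary):
echo 'vm.watermark_scale_factor = 200' > /etc/sysctl.d/99-kswapd.conf
# Watch whether direct-reclaim stalls keep climbing:
grep allocstall /proc/vmstat
```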
@Massimo-B Maybe you were confused by how truncate works when you looked at the script. Truncate does what the name says: it truncates the contents of the file at the...
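A quick demonstration, assuming GNU coreutils `truncate(1)`; the file name is made up:

```
printf 'hello world' > demo.txt
truncate -s 5 demo.txt             # file now contains just "hello"
truncate -s 1G demo.txt            # grows it again, but sparsely: no data blocks allocated
du -h --apparent-size demo.txt     # ~1G apparent size
du -h demo.txt                     # almost nothing actually on disk
```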
NTFS dedupe attacks the problem from a very different angle. As far as I know, it creates a hidden file somewhere in `System Volume Information` and stores all the duplicate...
> [NTFS] needs a garbage collector (to remove unused chunks from the big hidden file)

bees uses btrfs's backref counting to achieve this. It is similar in...
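You can observe that backref-based accounting from userspace; a sketch using `btrfs-progs` and reflink copies (the file names are made up):

```
# A reflink copy adds a second reference to the same extents instead of copying data:
cp --reflink=always big.img clone.img
# The "shared" column counts data in extents with more than one backref:
btrfs filesystem du big.img clone.img
# Removing one reference drops the backref count; at zero, btrfs frees the
# extent itself, so no separate garbage-collector pass is ever needed:
rm clone.img
```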
> What happens when you have an existing beeshash.dat hash table and you delete it and recreate it?

The hash table becomes empty. You must also delete...
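For the hash table itself, recreating it is a matter of recreating the file at the size you want; a sketch, where the 1G size is only an example (size it for your filesystem, since bees takes the table size from the file's size, as far as I know):

```
rm "$BEESHOME/beeshash.dat"
# Recreate it as a sparse file of the desired size:
truncate -s 1G "$BEESHOME/beeshash.dat"
```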
It's safe to kill bees the hard way at any time... It may just forget what it was doing during the last 15 minutes and rescan that file data the...
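In practice that means even this is fine; bees resumes from its last saved scan position on the next start:

```
# Kill bees the hard way; worst case it rescans the last ~15 minutes of work:
pkill -KILL bees
```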