trapexit
Yes, `dd` is your friend here. For apps that don't write large sets of data, the writeback cache can help.
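For example, a quick sequential-write test looks like this (the output path is a placeholder; point `of=` at the filesystem you actually want to measure, e.g. your mergerfs mount):

```shell
# Write 64 MiB of zeros and force the data to disk before dd reports the
# transfer rate, so the number isn't just the page cache absorbing writes.
# /tmp/dd_throughput_test.bin is a placeholder path for illustration.
dd if=/dev/zero of=/tmp/dd_throughput_test.bin bs=1M count=64 conv=fdatasync
```

Increase `count` for a longer, steadier measurement; small runs can be skewed by caching.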
It would require a fairly big change in the way it processes and represents data. Also, with mergerfs, you can't know for sure that it'd actually improve performance. I don't...
Now that I think about it, time might be a good limit to add. But I just took the approximate speed of my drives and then divided by how much...
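The back-of-the-envelope math above looks something like this (the drive speed and data size are made-up example numbers, not measurements):

```python
# Rough estimate of how long a full verification pass takes,
# given an assumed drive speed and total data size.
drive_speed_bps = 150 * 1000**2   # assume ~150 MB/s sequential read
data_bytes      = 8 * 1000**4     # assume 8 TB of data to verify

seconds = data_bytes / drive_speed_bps
hours = seconds / 3600
print(round(hours, 1))  # -> 14.8 hours for a full pass at these numbers
```

Dividing the total by how often you want a full pass completed gives a per-run size budget.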
Statistically, over time, random selection should work out the same, but I understand the concern. I'd have to think about it. There are a number of ways you could do it but...
Another idea could be to limit the throughput of scorch and then just run it in a loop as a service. You'd have to have a way to monitor the...
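One way to sketch that "run it in a loop as a service" idea is a systemd unit that restarts itself at low priority (this is a hypothetical unit file; the scorch arguments and paths are placeholders, not a documented invocation):

```ini
# Hypothetical /etc/systemd/system/scorch.service -- adjust to your setup.
[Unit]
Description=Continuous low-priority scorch verification

[Service]
# Placeholder command; substitute your actual scorch instruction and paths.
ExecStart=/usr/local/bin/scorch check /mnt/pool
# Keep CPU and IO impact minimal so normal workloads aren't affected.
Nice=19
IOSchedulingClass=idle
# Re-run an hour after each pass finishes, giving the loop behavior.
Restart=always
RestartSec=3600
```

`IOSchedulingClass=idle` only services scorch's disk IO when nothing else wants the disk, which is a rough stand-in for true throughput limiting.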
https://github.com/trapexit/scorch/releases/tag/1.0.0 should address your saving-to-a-log problem. It also includes a last-checked timestamp and sorting by that timestamp. I don't have a "don't check things checked within X days"...
You need to provide more detail. How are you running the script? There is a difference between "CHANGED" and "FAILED": "CHANGED" is a file that is no longer the same...
If you don't want changed files to be detected (which could be risky), then set "diff-fields" to an empty string.
Actually, it looks like it skips to hash checks if the diff check doesn't return, so those files will show as failed instead of changed. I guess I could either change...
As for regex... it's just standard Python regex and should work with any common regex pattern. What are you trying to do?
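Since it's plain Python regex, a filter pattern behaves like this (the filenames and pattern below are just illustrative, not scorch defaults):

```python
import re

# Example: match only video files by extension, case-insensitively.
pattern = re.compile(r'\.(mkv|mp4)$', re.IGNORECASE)

files = ["movie.MKV", "notes.txt", "clip.mp4"]
matched = [f for f in files if pattern.search(f)]
print(matched)  # -> ['movie.MKV', 'clip.mp4']
```

Anything accepted by Python's `re` module works, so anchors, alternation, and character classes are all fair game.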