Alicja Kario
many tools are able to ingest the `badblocks` format; provide it as an output option
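For reference, the `badblocks` output format is just one bad block number per line (the same list that `e2fsck -l` accepts), so an exporter could be as small as the sketch below (function name is hypothetical):

```
def write_badblocks_format(bad_blocks, path):
    """Write bad block numbers in the plain badblocks output format:
    one decimal block number per line."""
    with open(path, "w") as out:
        for block in sorted(bad_blocks):
            out.write(f"{block}\n")
```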
as the scan takes multiple hours and the results are written only at the end, it's easy to lose data if the output location turns out to be unwritable
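A possible approach (illustrative only, names are made up): append each result and sync it as soon as it is known, so an unwritable destination fails within seconds instead of after hours:

```
import os

def append_result(log_path, sector, read_time_ms):
    """Append a single result and force it to disk immediately,
    so partial results survive a crash or an unwritable destination."""
    with open(log_path, "a") as log:
        log.write(f"{sector} {read_time_ms:.3f}\n")
        log.flush()
        os.fsync(log.fileno())
```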
when the `-r` option is used to specify ranges to test, like:
```
75000000 78165360
```
no statistics are shown:
```
reread 101.95% done in 00:01:01, expected time:00:01:00
reread 101.95%...
```
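The over-100% figures suggest progress is still computed against a baseline other than the requested ranges; a sketch of range-relative progress (illustrative, not the tool's actual code):

```
def progress_percent(blocks_done, ranges):
    """Progress relative to the total size of the requested ranges,
    not the whole device, so it stays within 0-100%."""
    total = sum(end - start for start, end in ranges)
    return 100.0 * blocks_done / total if total else 100.0

# e.g. for the range above:
print(progress_percent(1_500_000, [(75_000_000, 78_165_360)]))  # ~47.4%
```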
as the typical scan takes many hours, the ability to stop and restart it at will would be useful
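A checkpoint file with the next sector to scan plus the results so far would be enough to resume; a minimal sketch (file format and names are assumptions):

```
import json

def save_checkpoint(path, next_sector, results):
    """Persist enough state to resume an interrupted scan later."""
    with open(path, "w") as f:
        json.dump({"next_sector": next_sector, "results": results}, f)

def load_checkpoint(path):
    """Return (next_sector, results), or a fresh state if no checkpoint exists."""
    try:
        with open(path) as f:
            state = json.load(f)
        return state["next_sector"], state["results"]
    except FileNotFoundError:
        return 0, []
```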
When rereading uncertain blocks the percentages reported are invalid:
```
re-reading 10 uncertain blocks
reread 1813.79% done in 00:00:00, expected time:00:00:00
re-reading 265 uncertain blocks
reread 409.43% done in 00:00:54,...
```
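Here the denominator should presumably be the number of uncertain blocks in the current re-read pass; an illustrative calculation (the real counters live elsewhere):

```
import time

def reread_progress(blocks_reread, uncertain_count, started_at):
    """Progress and ETA for a re-read pass, measured against the number of
    uncertain blocks in this pass, not the size of the original scan."""
    fraction = blocks_reread / uncertain_count if uncertain_count else 1.0
    elapsed = time.monotonic() - started_at
    eta = elapsed * (1.0 - fraction) / fraction if fraction > 0 else float("inf")
    return 100.0 * fraction, eta
```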
by storing checksums of the read blocks, it's possible to check if the disk is lying about its ability to read the data from the platters when running in exclusive...
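A sketch of the idea (device path, block size and hash choice are placeholders): hash each block on the first pass and compare on later passes; a digest mismatch without an I/O error means the drive returned wrong data:

```
import hashlib

def block_digest(dev, sector, block_size=4096):
    """Read one block and return its SHA-256 digest; comparing digests
    between passes detects a drive silently returning wrong data
    instead of reporting a read error."""
    with open(dev, "rb") as disk:
        disk.seek(sector * block_size)
        return hashlib.sha256(disk.read(block_size)).hexdigest()
```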
currently the qualification mechanism (for latent or reallocated sectors) uses quantiles; in practice the behaviour of sectors is much more complex – while when the block is in good condition,...
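For context, a toy version of a quantile-based qualification (thresholds invented, not the actual mechanism):

```
def flag_slow_sectors(read_times_ms, quantile=0.99, factor=3.0):
    """Flag sectors whose read time is well above a high quantile
    of the sample; quantile and factor here are made up."""
    times = sorted(read_times_ms.values())
    if not times:
        return []
    cutoff = times[int(quantile * (len(times) - 1))] * factor
    return [sector for sector, t in read_times_ms.items() if t > cutoff]
```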
collect data from SSDs: check how they behave when in good condition, after a few years of use, and after they have failed; detect SSDs during scans and set expected block times accordingly
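For the detection part, one well-known signal on Linux is the `rotational` queue attribute in sysfs (`0` means a non-rotational device); a minimal sketch with simplified error handling:

```
import os

def is_ssd(device):
    """Best-effort SSD detection on Linux via
    /sys/block/<dev>/queue/rotational ("0" = non-rotational)."""
    name = os.path.basename(device)          # e.g. "/dev/sda" -> "sda"
    path = f"/sys/block/{name}/queue/rotational"
    try:
        with open(path) as f:
            return f.read().strip() == "0"
    except OSError:
        return False  # unknown; fall back to rotational assumptions
```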
When the scan detects that some blocks are slower than expected, writing to them should improve the situation, either by causing reallocation or by refreshing the magnetic domains. Will...
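An illustrative refresh pass along those lines (read the block and write the same bytes back); this is destructive if anything else is using the device, so it is purely a sketch:

```
import os

def rewrite_block(dev, sector, block_size=4096):
    """Read a slow block and write the same data back, which may trigger
    reallocation or refresh the recording. For illustration only."""
    fd = os.open(dev, os.O_RDWR)
    try:
        os.lseek(fd, sector * block_size, os.SEEK_SET)
        data = os.read(fd, block_size)
        os.lseek(fd, sector * block_size, os.SEEK_SET)
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)
```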
I've configured a GitHub Actions matrix build, and while Coveralls recognises the build as parallel and as a PR, it doesn't recognise it as a PR against the master branch,...