
Doesn't report failed files in batch operations

Open ghost opened this issue 5 years ago • 8 comments

When checking multiple sfv's in a folder with subfolders, it fails to report which files are missing/bad/corrupt; it only lists a total. It seems the log window is reset for each sfv file. It should keep a list, or have some way to output the files that failed to validate.

ghost avatar May 31 '19 16:05 ghost

Ouch. Any kind of screenshot?

vatterspun avatar Jun 02 '19 14:06 vatterspun

Only the last sfv's list (of good files) was left in the log window. Granted, I am checking > 5000 folders, which may be too much for it.

ghost avatar Jun 03 '19 02:06 ghost

It clears the log window with every loaded .sfv; that seems to be the issue. It's quick and easy to test this.

ghost avatar Jun 05 '19 00:06 ghost

Let's review.

Scenario: hundreds of individual folders, each with an .sfv in it. Means of initiating RapidCRC: right-clicking the parent folder -> RapidCRC -> open all hash files.

What should happen: broken and/or missing and/or erroneous entries should be left visible on the log screen.

What really happens: only 'currently checking' entries are shown. RapidCRC thus becomes useless for its purpose, which is to pinpoint broken and/or missing files.

ghost avatar Jun 05 '19 07:06 ghost

Do you have job queueing enabled? This will allow multiple jobs to be displayed in the list at the same time. Otherwise each new job clears the list (this is the behavior of the old plain rapidcrc). Checking more than one sfv file in one go is really only meant to be used with this enabled.

OV2 avatar Jun 05 '19 09:06 OV2

Was suffering from the same problem here; enabling Job Queuing indeed fixes it.

However, enabling Job Queuing restricts RapidCRC to single-instance mode, forcing serial operation as opposed to scaling out in parallel.

Suppose I have multiple independent paths/drives to check: job queuing works against me, because every moment a device's bandwidth isn't used, it's wasted. And one instance just works through things in order, which is hardly beneficial if I want to get a lot of batch checking done in parallel.

Here's what I propose: each RapidCRC instance detects whether one or multiple input hash files are being dealt with, and acts accordingly. Enabling "job queuing" seems like a kludge to compensate for defective program logic. Why should any of this be the user's problem? Just accept the multiple hash files, or one hash file, and show me meaningful results; let me work in parallel if I so decide. Sheesh. :confused:

SuperHirmu avatar Jun 05 '19 10:06 SuperHirmu

It's not as easy as you make it sound. Job queueing is an option because it changes the behavior of the main file list. Some people like to always have only one job visible, and thus want the list to auto-clear when they start a new job. Others like to drop one sfv file after the other onto the window and have them queue up.

The same is true for operating from the shell extension - some like you want to have multiple instances, while others like to queue up their sfv files into one window.

You can still have multiple instances if job queueing is enabled, you just have to start them manually and then use their open buttons or drag&drop.

What might be possible is to temporarily enable job queueing if it is disabled and you then start a multi-sfv verification. If I find some free time to work on rcrc again I'll see if that is feasible. The other possibility would be an option so that the shell extension still spawns new instances even if job queueing is enabled.

OV2 avatar Jun 05 '19 10:06 OV2

It could spawn multiple instances and leave them open when invalid files are found, or have some way to output the results to a file, which would be desirable regardless.

Command line options for creating/check would also be welcome, but I'd have to open a feature request for that.

ghost avatar Jun 05 '19 20:06 ghost
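As a stopgap until RapidCRC grows a failure log of its own, the batch workflow discussed above (walk a tree, check every .sfv, record only the failures to a file) can be sketched outside the program. This is a minimal, hypothetical Python script, not part of RapidCRC; it assumes the common SFV layout of `filename CRC32HEX` lines with `;`-prefixed comments:

```python
import zlib
from pathlib import Path

def crc32_of(path, chunk=1 << 20):
    """Compute the CRC-32 of a file, streaming it in chunks."""
    crc = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            crc = zlib.crc32(block, crc)
    return crc & 0xFFFFFFFF

def verify_sfv(sfv_path):
    """Yield (filename, status) per entry; status is OK, BAD, or MISSING."""
    base = Path(sfv_path).parent
    with open(sfv_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):   # skip blanks and comments
                continue
            name, _, expected = line.rpartition(" ")
            target = base / name
            if not target.is_file():
                yield name, "MISSING"
            elif crc32_of(target) == int(expected, 16):
                yield name, "OK"
            else:
                yield name, "BAD"

def check_tree(root, log_path):
    """Verify every .sfv under root; write only the failures to log_path."""
    with open(log_path, "w", encoding="utf-8") as log:
        for sfv in Path(root).rglob("*.sfv"):
            for name, status in verify_sfv(sfv):
                if status != "OK":
                    log.write(f"{status}: {sfv.parent / name}\n")
```

Because every subfolder's failures land in one file, nothing is lost when the next sfv is processed, which is exactly the behavior the thread is asking for.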