NiftyNet
Evaluation doesn't consider save_seg_dir of Inference and compares labels to labels
Documentation lacking
While I was able to understand training and inference actions from the configuration file documentation, the evaluation action was less clear. For starters, it's not mentioned in the overview.
Intuitively, I'd expect the evaluation to either
1. run inference as configured (perhaps without writing inference output), or
2. read the output from a prior inference run, locating that output according to the inference config section,

before evaluating against the ground-truth data set of the custom application section.
Issue
When running the classification application (which, incidentally, is not listed in the config doc), however, the evaluation reports perfect scores in `save_csv_dir`. I assume this SO question describes the same problem.
This is because the data compared against `labels` (at least in the classification application) defaults to `labels` itself when `inferred` is not found, as implemented in `add_inferred_output_like`.
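For illustration, the fallback behaves roughly like this (a minimal sketch with hypothetical names, not NiftyNet's actual implementation):

```python
# Sketch of the silent fallback described above: when no 'inferred'
# data source is configured, the ground-truth source is reused as the
# inference result, so evaluation compares labels to labels and
# trivially reports perfect scores.

def add_inferred_output_like(data_sources, like_name='label',
                             new_name='inferred'):
    """If 'inferred' is missing, silently alias it to the ground truth."""
    if new_name not in data_sources:
        data_sources[new_name] = data_sources[like_name]
    return data_sources

sources = add_inferred_output_like({'label': 'ground_truth.csv'})
print(sources['inferred'])  # -> ground_truth.csv
```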
If I simply define `inferred` to point to the `inferred.csv` written by the prior inference run,

```ini
[inferred]
csv_file = model_dir/save_seg_dir/inferred.csv
```

it works as expected (option 2 above).
So I infer that `inferred` is not correctly inferred when running evaluation :innocent:, as it does not respect `save_seg_dir`.
About comparing to `label` instead

I can see that being useful for testing / dry runs, but should it be the silent default? When `inferred` isn't found, I'd expect at least a log entry alerting me to that. As it is, the only visible effect is that an `inferred.csv` pointing to the label files is written again to the correct `save_seg_dir`, actually overwriting the correct one from the prior inference run.
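A guarded version of that default, again sketched with hypothetical names rather than NiftyNet's current code, would at least warn before falling back:

```python
# Suggested behaviour: keep the fallback for dry runs, but log a
# warning so the user knows the evaluation scores are meaningless.
import logging

logger = logging.getLogger('niftynet.evaluation')

def add_inferred_output_like(data_sources, like_name='label',
                             new_name='inferred'):
    """Fall back to the ground-truth source, but say so loudly."""
    if new_name not in data_sources:
        logger.warning(
            "no '%s' data source found; defaulting to '%s'. "
            "Evaluation will compare the ground truth to itself.",
            new_name, like_name)
        data_sources[new_name] = data_sources[like_name]
    return data_sources
```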
Hello, when I configure the Inference section during the training process, there is no csv_file output. Why is this? Can you tell me how to solve the problem?