nnUNet
Missing summary.json and inference without training all 5 folds
Hi,
- My training completed after 1000 epochs and continued with the validation set. However, I got output stating that the preprocessed data for 3d_cascade_fullres is missing. I'm a little confused here, because the prediction for 3d_cascade_fullres seemed to start automatically after the 3d_lowres training, and I wonder when and how I should preprocess in this situation.
- Besides the 3d_cascade_fullres issue, I would also like to check whether my validation output for 3d_lowres is complete. Are these .nii.gz files the predicted labels of the validation cases? I also noticed the summary.json file is missing and wonder why. Is there any other way to find the performance metric (Dice) of my model besides reading the log?
- I am currently training folds 3 and 4 for this configuration and would like a preview of how well the model can do at inference. Just curious, is it possible to run inference before all 5 folds are trained? Can I use checkpoint_latest.pth instead, once the Dice score has become stable during training?
Thank you so much!!
For predicting test files, you can use the -f flag to control which fold(s) you want to use.
However, for the full 5-fold cross-validation ensemble, I think all 5 folds are required to be present.
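As a minimal sketch (assuming the nnU-Net v2 CLI, which matches the nnUNetPlans_* folder names mentioned below; the paths and the dataset ID 1 are placeholders), predicting with a subset of the trained folds could look like this:

```bash
# Predict using only folds 0-2 instead of the default 5-fold ensemble.
# -chk switches from checkpoint_final.pth to checkpoint_latest.pth, which
# may help if a fold's training has not fully finished yet.
nnUNetv2_predict -i /path/to/imagesTs -o /path/to/predictions \
    -d 1 -c 3d_lowres -f 0 1 2 -chk checkpoint_latest.pth
```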
Hi @lolawang22,
Concerning your question(s):
- When preprocessing, did you change the -c config? By default, 2d, 3d_lowres and 3d_fullres are all preprocessed, so this shouldn't be an issue. You can check your preprocessed data folder for the folder names nnUNetPlans_2d, nnUNetPlans_3d_lowres and nnUNetPlans_3d_fullres. If one of them is missing, you can re-run preprocessing for it (see the first sketch after this list).
- The summary.json not being present in your validation folder suggests that your validation didn't finish. Maybe you ran out of memory during inference? Re-running the validation should regenerate it (see the second sketch after this list).
- Thankfully @thangngoc89 already hinted at a solution. Let me know if there's anything missing.
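For the preprocessing point, a minimal sketch of how you could re-run preprocessing for specific configurations (again assuming the nnU-Net v2 CLI; the dataset ID 1 is a placeholder):

```bash
# Re-run planning and preprocessing only for the listed configurations.
# Adjust -c to whichever configuration is missing from your
# nnUNet_preprocessed folder.
nnUNetv2_plan_and_preprocess -d 1 -c 3d_lowres 3d_fullres
```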
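For the missing summary.json, a sketch of how you could re-run only the validation of an already trained fold (dataset ID, configuration and fold number are placeholders):

```bash
# --val skips training and only runs the validation step, which also
# writes the summary.json with the per-case Dice scores.
nnUNetv2_train 1 3d_lowres 0 --val
```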
Thank you for the advice! I'll try these suggestions and see if that solves it.
Closing this issue for now, as it has been stale for a while. You are welcome to re-open if the problem persists!