Question about the dice during training.
Thanks for your excellent work! I have used the nnU-Net framework from v1 to v2, but I don't understand the Dice reported during training within one epoch. After each epoch, the console prints a "Pseudo dice" value. Could you explain what the Pseudo Dice metric represents and how it differs from the standard Dice coefficient?
This issue touches on it briefly: https://github.com/MIC-DKFZ/nnUNet/issues/2234#issue-2320469317
The paper also explains it: "For all other networks we interpret the samples in the batch as a pseudo-volume and compute the dice loss over all voxels in the batch."
Yes - so the pseudo Dice is only calculated on 50 random foreground patches from the validation cases, and not in the original image spacing but in the training target spacing.
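For intuition, here is a minimal sketch (my own simplification, not nnU-Net's actual implementation) of the difference the paper's quote describes: the batch of patches is treated as one pseudo-volume, so overlap and volume are summed over all voxels in the batch before a single Dice value is formed, instead of computing one Dice per patch and averaging.

```python
import torch

def pseudo_dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Treat the whole batch as ONE pseudo-volume: sum overlap and volumes
    # over every voxel in the batch, then form a single Dice value.
    tp = (pred * target).sum()
    return (2 * tp + eps) / (pred.sum() + target.sum() + eps)

def per_image_dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Standard per-image Dice: reduce over spatial dims only, producing one
    # Dice per sample, then average across the batch.
    dims = tuple(range(1, pred.ndim))
    tp = (pred * target).sum(dim=dims)
    dice = (2 * tp + eps) / (pred.sum(dim=dims) + target.sum(dim=dims) + eps)
    return dice.mean()

# pred/target: binary masks for one foreground class, shape (batch, x, y, z);
# the batch size of 50 mirrors the 50 sampled validation patches mentioned above.
pred = torch.randint(0, 2, (50, 32, 32, 32)).float()
target = torch.randint(0, 2, (50, 32, 32, 32)).float()
print(pseudo_dice(pred, target), per_image_dice(pred, target))
```

One practical consequence of the pseudo-volume formulation: a patch that contains little or no foreground does not contribute its own unstable (near-zero or undefined) Dice term; it simply adds its voxels to the batch-wide sums.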