
Masked Loss


Hey,

Thanks for the great work on this! I have a question: I have GT segmentations that are wrong in some scans, and I know roughly where, but it is infeasible to manually correct my thousands of scans. What I would like to do is give nnU-Net a mask as a second channel marking where I know the GT is most certainly incorrect; the loss could then be ignored in that masked area. As far as I can see in the code, there is no option to mask the loss in some way or another, right?
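
To make concrete what I mean, here is a minimal sketch of such a masked loss in PyTorch (the function name and the `valid_mask` channel are made up for illustration; this is not existing nnU-Net code):

```python
import torch.nn.functional as F

def masked_ce_loss(logits, target, valid_mask):
    """Cross-entropy that ignores voxels where valid_mask == 0.

    logits:     (B, C, X, Y, Z) network output
    target:     (B, X, Y, Z)    GT label map
    valid_mask: (B, X, Y, Z)    float, 1 = trustworthy GT, 0 = ignore
    """
    per_voxel = F.cross_entropy(logits, target, reduction="none")
    per_voxel = per_voxel * valid_mask
    # average only over voxels that actually contribute
    return per_voxel.sum() / valid_mask.sum().clamp(min=1)
```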

Additionally, let's say I have some scans where one label is completely missing from the GT (but should be there). For me, it would also make sense to support this kind of data by masking out the loss for any label that is not present in the GT of an individual scan. Currently, the model would learn to not segment that label, defeating the intent. This could be a flag in the dataset.json, for example.
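
And for the per-label case, something like a Dice loss that skips classes absent from a sample's GT (again just a sketch; `present` would come from the proposed dataset.json flag):

```python
def classwise_masked_dice(probs, target_onehot, present, eps=1e-5):
    """Soft Dice loss that skips classes absent from a sample's GT.

    probs:         (B, C, X, Y, Z) softmax output
    target_onehot: (B, C, X, Y, Z) one-hot GT
    present:       (B, C) float, 1 if the class is annotated in that sample
    """
    spatial = tuple(range(2, probs.ndim))
    intersection = (probs * target_onehot).sum(spatial)            # (B, C)
    denominator = probs.sum(spatial) + target_onehot.sum(spatial)  # (B, C)
    dice = (2 * intersection + eps) / (denominator + eps)          # (B, C)
    dice = dice * present  # zero out classes missing from this sample's GT
    return 1 - dice.sum() / present.sum().clamp(min=1)
```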

Can you recommend a way to implement these into the nnU-Net framework (perhaps point to some lines of code)? That would be very helpful! Thanks in advance! Hendrik

Hendrik-code (Sep 07 '23)

Hello all,

I am very interested in this feature, as it is closely related to #1471 if we consider a voxel weight of 0 where we want to ignore the loss.

Thanks !

Thibescobar (Sep 13 '23)

Hey @Hendrik-code,

I am one of the nnU-Net maintainers. We are planning something like this for the future. Could you give me your email so I can share further details?

Best, Karol

Karol-G (Sep 27 '23)

Hello Karol, I'm very interested in this feature. Can you give me more details about it? My email is: [email protected]

haibinswe (Sep 29 '23)

Hello Karol, Huang,

I am also very interested; if I could be included in the loop, that would be great. Thank you very much.

Best regards, Thibault


Thibescobar (Sep 29 '23)

Hello @Karol-G,

Thanks for your response; I very much appreciate the offer. My email is: hendrik.moeller[at]tum.de

Best regards, Hendrik

Hendrik-code (Oct 01 '23)

Hello Karol, I'm highly intrigued by this functionality. Could you please provide me with additional information regarding it? You can reach me via email at thibescobar[at]gmail.com

Thank you very much. Best regards.

tbskbr (Oct 27 '23)

Hello @Karol-G,

Have you written me an email about this yet? I would still be very interested in this functionality and could also possibly help with its implementation.

Best regards, Hendrik

Hendrik-code (Feb 27 '24)

Hello @Karol-G,

Have you sent an email regarding this yet? I'm still highly interested in this feature. Could you please send me an email at [email protected]?

Best, Piyalitt

piyalitt (Mar 12 '24)

Hey all,

there will be an update for the nnU-Net this week in which we make all the details public for training on sparse annotations / masked regions / with a masked loss. So stay tuned :)

Best regards, Karol

Karol-G (Mar 18 '24)

Hey all,

Today, we officially released a new feature for the nnU-Net: training with a so-called ignore label. This feature masks out all pixels labeled with the ignore label during the loss computation, making it possible to train on sparsely annotated data or to mask out faulty regions in reference segmentations. For more details, see the ignore label documentation. Although the ignore label functionality has been in place for some time, its official release was delayed until its evaluation was complete. You can find more information in our preprint.
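
To give a rough idea of how it is used (the label names below are placeholders; see the documentation for the authoritative description): in nnU-Net v2 you declare the ignore label as one more entry in the labels section of dataset.json, and it must be assigned the highest label value:

```json
"labels": {
    "background": 0,
    "organ": 1,
    "tumor": 2,
    "ignore": 3
}
```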

Best regards, Karol

Karol-G (Mar 20 '24)

Great update. I'll test it today.

Thibescobar (Mar 27 '24)

Hey, thanks for the effort, that looks very nice. Will test it out soon. @Karol-G

However, one question: I don't believe this addresses my second question. Let's say I have two samples, A and B, both of which contain classes 1 and 2 in the images. In A, only class 1 is (densely) annotated; in B, only class 2 is annotated. (Background is class 0.)

Now I want to jointly train on them so the model learns both classes.

For the ignore_label to work properly in this scenario, I would need to be able to annotate the background in both samples (and label everything else not belonging to the annotated classes as ignore_label), or to roughly estimate the location of the missing class in both samples so I could paint it with the ignore_label.

I cannot automatically know where the background is in A or B, as in both, one object (of the class that is not annotated) is missing and could be anywhere in the image. Similarly, roughly knowing the location is infeasible in lots of datasets. Whatever I put as the ignore_label, the model would either not learn the background or wrongly learn one of the classes (seeing it both as background and as the correct object). Am I missing something, or is this correct?

Best regards, Hendrik

Hendrik-code (May 07 '24)

Hey @Hendrik-code,

A potential solution to this problem could be to sparsely label the background as well. You can annotate (either manually or automatically) only the areas where you are confident it is background and mark everything else with the ignore label, including any missing classes in the sample. This approach allows you to train on both classes with minimal effort in annotating the background. In my experience, a very coarse or scribble-based segmentation can be done quickly, as the background class is usually easy to identify.

Additionally, it's important to ensure that in samples where you have class 1 or 2 segmented, you provide enough border annotations for the model to understand the transition between classes. For example, if sample A has class 1 densely annotated, you should include a complete contour annotation with class 0 around class 1 or at least some sections of the contour. This helps the model accurately learn the boundaries between different classes.
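
As a rough sketch of the automated variant (the helper and its parameters are made up, not part of nnU-Net): given a sample where only class 1 is densely annotated, one could keep class 1, paint a thin confident-background contour around it, and set everything else to the ignore label:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def sparsify_to_ignore(seg, annotated_class, ignore_label, border_width=3):
    """Keep one densely annotated class, add a confident background
    contour around it, and mark everything else as ignore."""
    fg = seg == annotated_class
    # ring of voxels directly adjacent to the annotated object
    ring = binary_dilation(fg, iterations=border_width) & ~fg
    out = np.full_like(seg, ignore_label)
    out[fg] = annotated_class
    out[ring] = 0  # confident background along the class boundary
    return out
```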

Best regards, Karol

Karol-G (May 21 '24)