tf_unet
Questions about UNet architecture and Dice loss
Hello,
I have two questions about the UNet architecture and Dice loss function:
- In the UNet paper, there is no ReLU layer after the final conv. In experiments with my own implementation of UNet, such a ReLU doesn't help. I don't know whether the ReLU at Line 136 of unet.py helps or not.
- IMHO, the Dice loss can be regarded as
# of True Positives / (# of Positives + # of False Positives), but the Dice loss at Line 231 of unet.py counts both positives and negatives, if I understand the code correctly.
Please correct me if I am wrong. Thanks.
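To make the second question concrete, here is a minimal NumPy sketch (not the repository's actual code; the array shapes and the `dice_coefficient` helper are invented for illustration) comparing a Dice score computed over all one-hot channels, background included, with one computed over the foreground channel only:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Soft Dice: 2 * |A ∩ B| / (|A| + |B|), with eps to avoid division by zero
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# Toy one-hot predictions and labels, shape (batch, H, W, n_classes)
pred = np.zeros((1, 2, 2, 2))
target = np.zeros((1, 2, 2, 2))
# channel 1 = foreground, channel 0 = background
pred[0, :, :, 1] = [[1, 0], [0, 0]]
pred[0, :, :, 0] = 1 - pred[0, :, :, 1]
target[0, :, :, 1] = [[1, 1], [0, 0]]
target[0, :, :, 0] = 1 - target[0, :, :, 1]

# Over ALL channels (the reading of the code described above):
dice_all = dice_coefficient(pred, target)      # ≈ 0.75
# Over the foreground channel only:
dice_fg = dice_coefficient(pred[..., 1], target[..., 1])  # ≈ 0.667
```

Because the correctly predicted background pixels enter the intersection, the all-channel score is higher than the foreground-only score; on images dominated by background, this can mask poor foreground segmentation.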
Hi @haichaoyu thanks for reporting this.
- I think you're right. The code diverges from the paper's architecture. There is no particular reason why I added this; ATM I'm not sure how it affects performance.
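For intuition on why an extra ReLU before the softmax can matter, here is a small NumPy sketch (the logit values and `softmax` helper are made up for illustration, not taken from the repository): clamping negative logits to zero compresses the output distribution and discards information the softmax would otherwise use.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

# Hypothetical logits from the final 1x1 conv for one pixel, two classes
logits = np.array([-2.0, 3.0])

# Paper-style: softmax applied directly to the logits
probs_plain = softmax(logits)

# With an extra ReLU first, the negative logit is clamped to 0,
# flattening the distribution
probs_relu = softmax(np.maximum(logits, 0.0))
```

The confidently-negative class loses its margin after the ReLU, so `probs_relu` assigns it more mass than `probs_plain` does, which is one plausible reason the extra ReLU doesn't help.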
- It would be great if someone could send a PR addressing this. I'd love to hear your thoughts on it.
@jakeret Hi, I added a new version of the Dice loss but did not test it. Sorry about that.
Thanks I'll have a look