
Loss conflict between evidential cross-entropy and dice

Open noteless517 opened this issue 1 year ago • 2 comments

When I tried to train on my dataset with the loss function you provided, I found that the combination of evidential CE (with KL) and Dice does not work well. Their conflict causes the network to output all zeros for both the target and the background. I wonder if you would be willing to help with this problem. Thank you for sharing your code; it has helped a lot.
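For context, here is a minimal sketch of one common way such a combined loss is set up in evidential segmentation (this is an assumption about the general recipe, not the repository's exact implementation; `dirichlet_ce` uses a log-form variant of the evidential cross-entropy, and all names are hypothetical):

```python
import math

def dirichlet_ce(evidence, y_onehot):
    """Log-form evidential cross-entropy for a single pixel.

    evidence: non-negative per-class evidence (e.g. softplus of logits)
    y_onehot: one-hot ground-truth label for the same pixel
    """
    alpha = [e + 1.0 for e in evidence]   # Dirichlet parameters
    strength = sum(alpha)                 # Dirichlet strength S
    return sum(y * (math.log(strength) - math.log(a))
               for y, a in zip(y_onehot, alpha))

def soft_dice(probs, labels, eps=1e-6):
    """Soft Dice loss over a flat list of foreground probabilities.

    Note: computed on probabilities (e.g. alpha_k / S), not on raw evidence.
    """
    inter = sum(p * g for p, g in zip(probs, labels))
    denom = sum(p * p for p in probs) + sum(g * g for g in labels)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

A KL regularizer on the misleading evidence is usually added to `dirichlet_ce` with an annealed weight; it is omitted here to keep the sketch short.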

noteless517 avatar May 24 '23 05:05 noteless517

This may be a problem with your GT or your Dice loss settings. Can you provide more details?

Cocofeat avatar May 24 '23 06:05 Cocofeat

I can get good results with a plain U-Net using Dice loss with softmax, or with the evidential CE loss using softplus. But when I feed the evidence from softplus into the combination of Dice and evidential CE, or into Dice alone, the network tends to drive every prediction (each class at every pixel) to zero. I have tried different evidence activation functions, such as elu+1, relu, and exp, and they all show the same problem. I'm not sure how to deal with it.
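One thing worth checking (an assumption about the cause, not a confirmed diagnosis): if the Dice term is applied to the raw evidence rather than to the expected Dirichlet probabilities, an all-zero output can be a degenerate optimum, because softplus evidence near zero makes every class score vanish. Mapping to expected probabilities `p_k = alpha_k / S` keeps the prediction normalized, as this sketch (hypothetical helper names) illustrates:

```python
import math

def softplus(x):
    """Numerically plain softplus; fine for small illustrative inputs."""
    return math.log1p(math.exp(x))

def expected_probs(logits):
    """Map per-class logits to expected Dirichlet probabilities p_k = alpha_k / S."""
    evidence = [softplus(z) for z in logits]
    alpha = [e + 1.0 for e in evidence]
    strength = sum(alpha)
    return [a / strength for a in alpha]

# Even when all evidence is ~0 (very negative logits), the expected
# probabilities stay uniform rather than all-zero, so a Dice loss on
# them still receives a usable gradient instead of collapsing.
```

The same reasoning applies to the elu+1, relu, and exp variants: they change how evidence is produced, not whether Dice sees a normalized distribution.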

noteless517 avatar May 29 '23 05:05 noteless517