TBraTS
Loss conflict between evidential cross-entropy and dice
When I tried to train on my own dataset with your loss function, I found that the combination of evidential ce (with KL) and dice does not work well. Their conflict leads the network to output all zeros for both the target and the background. I wonder if you would be willing to help with this problem. And thank you for sharing your code, it has really helped a lot.
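For reference, the combination I mean is roughly the following. This is only a minimal sketch in the style of Sensoy et al.'s evidential loss, not the repo's exact code; the function name, shapes, and annealing schedule are my own assumptions:

```python
import math
import torch

def evidential_ce_kl(evidence, target_onehot, epoch, kl_anneal_epochs=10):
    # evidence:      (B, C, ...) non-negative, e.g. softplus(logits)
    # target_onehot: (B, C, ...) one-hot ground truth
    alpha = evidence + 1.0                       # Dirichlet parameters
    S = alpha.sum(dim=1, keepdim=True)           # Dirichlet strength

    # Expected cross-entropy under the Dirichlet (digamma form)
    ce = (target_onehot * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=1)

    # Keep only the "misleading" evidence (evidence for wrong classes)
    alpha_tilde = target_onehot + (1.0 - target_onehot) * alpha
    S_tilde = alpha_tilde.sum(dim=1, keepdim=True)
    C = alpha.shape[1]

    # KL( Dir(alpha_tilde) || Dir(1, ..., 1) )
    kl = (torch.lgamma(S_tilde.squeeze(1))
          - torch.lgamma(alpha_tilde).sum(dim=1)
          - math.lgamma(C)
          + ((alpha_tilde - 1.0)
             * (torch.digamma(alpha_tilde) - torch.digamma(S_tilde))).sum(dim=1))

    # Anneal the KL term in gradually over the first epochs
    lam = min(1.0, epoch / kl_anneal_epochs)
    return (ce + lam * kl).mean()
```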
Could this be a problem with your gt or dice loss function settings? Can you provide more details?
I can get good results with a plain UNet using dice loss with softmax, or with the evidential ce loss using softplus on its own. But when I feed the softplus evidence into the combination of dice and evidential ce, or into dice alone, the network tends to predict zero for every class at every pixel. I have tried different activation functions for the evidence (elu+1, relu, exp), and they all show the same problem. I'm confused about how to deal with it.
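For what it's worth, one thing I am checking is whether the dice term sees normalized probabilities at all. Below is a sketch of a dice loss that consumes the Dirichlet's expected probabilities `alpha / S` rather than the raw evidence; the helper name and one-hot target shape are my own assumptions, not necessarily what the repo does:

```python
import torch

def soft_dice_on_dirichlet(evidence, target_onehot, eps=1e-5):
    # Dice on the Dirichlet's expected class probabilities p = alpha / S,
    # i.e. a normalized quantity, instead of the unnormalized evidence.
    alpha = evidence + 1.0
    p = alpha / alpha.sum(dim=1, keepdim=True)   # (B, C, ...) probabilities
    dims = tuple(range(2, p.dim()))              # reduce over spatial dims
    inter = (p * target_onehot).sum(dims)
    denom = p.sum(dims) + target_onehot.sum(dims)
    dice = (2.0 * inter + eps) / (denom + eps)   # per-sample, per-class dice
    return 1.0 - dice.mean()
```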