multi-task-learning-example-PyTorch

Homoscedastic Loss Function

Open nivesh48 opened this issue 3 years ago • 6 comments

This isn't an issue but a doubt I would like to clarify. When I use the homoscedastic loss in my area of research, the loss values are negative and converge to a negative value. Is this behavior natural for this multi-task loss, or am I making a mistake?

nivesh48 avatar May 16 '21 07:05 nivesh48

> This isn't an issue but a doubt I would like to clarify. When I use the homoscedastic loss in my area of research, the loss values are negative and converge to a negative value. Is this behavior natural for this multi-task loss, or am I making a mistake?

Same question. I also observed that the log_var values keep decreasing further into negative territory, and the total loss keeps decreasing with them. I have no idea whether the model can actually converge.
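For context: with the usual log-variance parameterization of this kind of loss, a negative total is expected behaviour. Below is a minimal sketch (illustrative names, not this repository's exact code, assuming the learned parameter is s_i = log σ_i²):

```python
import torch
import torch.nn as nn

class HomoscedasticLoss(nn.Module):
    """Uncertainty weighting with learned log-variances s_i = log(sigma_i^2).

    total = sum_i( exp(-s_i) * L_i + s_i )

    The additive s_i terms are unbounded below, so the total can become
    negative even while every task loss L_i stays positive.
    """

    def __init__(self, num_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        # task_losses: iterable of scalar loss tensors, one per task
        total = torch.zeros((), device=self.log_vars.device)
        for loss_i, s_i in zip(task_losses, self.log_vars):
            total = total + torch.exp(-s_i) * loss_i + s_i
        return total
```

For a single task, minimizing exp(-s) * L + s over s gives s = log L, and the value at that point is 1 + log L, which is negative whenever L < 1/e ≈ 0.37. So once the task losses drop below roughly 0.37, a negative (and still decreasing) total is exactly what this formulation produces; convergence is better judged from the individual task losses than from the weighted sum.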

zhackzey avatar Jun 30 '21 08:06 zhackzey

I think the problem arises from the fact that the implementation doesn't use the same weighting as the paper. See Issue #3.

dariocazzani avatar Dec 13 '21 21:12 dariocazzani

> I think the problem arises from the fact that the implementation doesn't use the same weighting as the paper. See Issue #3.

I don't think the formula is wrong. The uncertainty parameter is log σ², so exp(-log σ²) is 1/σ².

zhackzey avatar Dec 14 '21 03:12 zhackzey

Hi @zhackzey, as Issue #3 reported, I think the uncertainty weight should be σ^-2, while the learned penalty term is log σ. When the code takes the exponential, exp(-log σ) = σ^-1, which is different from the σ^-2 weighting in the paper.
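Writing the two readings side by side (paper notation, regression case, constant factors ignored) may make the mismatch clearer:

```latex
% Paper: weight each task loss by the precision and penalize large sigma
\mathcal{L} \;=\; \frac{1}{2\sigma^{2}}\,L_{\text{task}} \;+\; \log\sigma

% If the learned parameter is the log-variance  s = \log\sigma^{2}:
e^{-s} = \sigma^{-2}, \qquad
e^{-s}\,L_{\text{task}} + s \;=\; \frac{L_{\text{task}}}{\sigma^{2}} + 2\log\sigma

% If the learned parameter is instead  s = \log\sigma:
e^{-s} = \sigma^{-1},
\quad\text{which weights } L_{\text{task}} \text{ by } \sigma^{-1}
\text{ rather than the paper's } \sigma^{-2}.
```

So whether the code matches the paper depends on whether the learned parameter is read as log σ or as log σ²: both readings give the same penalty term up to a factor of 2, but they weight the task losses differently.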

tong-zeng avatar Feb 06 '22 17:02 tong-zeng

Hi @zhackzey Just realized that the author corrected the formula in a newer version of the paper: https://arxiv.org/pdf/1703.04977.pdf

tong-zeng avatar Feb 06 '22 18:02 tong-zeng