robust_loss_pytorch

Is weight_decay needed?

wzn0828 opened this issue 4 years ago · 2 comments

Hi, thanks for your wonderful work.

While using your AdaptiveLossFunction, I found that alpha does not decrease; it stays at its highest value throughout the training process.

So I applied weight_decay to alpha and scale. However, I now think weight_decay should not be applied to these two parameters.
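For reference, here's roughly my setup — a minimal sketch, assuming AdaptiveLossFunction is the nn.Module from this repo and exposes its latent alpha/scale through .parameters(); the model and hyperparameters are stand-ins:

```python
import numpy as np
import torch
from robust_loss_pytorch import adaptive

# Hypothetical toy model; the architecture here is illustrative.
model = torch.nn.Linear(10, 1)

# Adaptive loss over 1-D residuals; alpha and scale are its latent parameters.
adaptive_loss = adaptive.AdaptiveLossFunction(
    num_dims=1, float_dtype=np.float32, device='cpu')

# Two parameter groups: weight decay on the model's weights only,
# and none on the loss's latent alpha/scale parameters.
optimizer = torch.optim.Adam([
    {'params': model.parameters(), 'weight_decay': 1e-4},
    {'params': adaptive_loss.parameters(), 'weight_decay': 0.0},
], lr=1e-3)

x, y = torch.randn(32, 10), torch.randn(32, 1)
optimizer.zero_grad()
loss = torch.mean(adaptive_loss.lossfun(model(x) - y))
loss.backward()
optimizer.step()
```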

What's your opinion?

wzn0828 avatar Apr 28 '21 01:04 wzn0828

If the alpha value stays large throughout optimization, it sounds like your data doesn't have very many outliers, in which case you'll probably get optimal performance by just allowing alpha to be large. Regularizing alpha to be small does not make much sense to me unless you have a prior belief on the outlier distribution of your data. If you want to control the shape of the loss function, I'd just use the general formulation in general.py, and set alpha to whatever value you want.
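Concretely, something like this — a minimal sketch, assuming general.lossfun(x, alpha, scale) takes a tensor of residuals plus alpha/scale tensors that broadcast against it; the numbers are placeholders:

```python
import torch
from robust_loss_pytorch import general

residual = torch.randn(32)  # predictions minus targets

# Fix the shape parameter instead of learning it:
# alpha = 2 gives L2, alpha = 1 gives Charbonnier (pseudo-Huber),
# alpha = 0 gives Cauchy/Lorentzian, alpha = -2 gives Geman-McClure.
loss = torch.mean(general.lossfun(
    residual, alpha=torch.tensor(1.0), scale=torch.tensor(0.5)))
```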

jonbarron avatar Apr 28 '21 03:04 jonbarron

OK, your answer clears up my confusion. Thank you very much.

wzn0828 avatar Apr 28 '21 03:04 wzn0828