robust_loss_pytorch
NaN occurs in backward of loss_otherwise
Hi, I'm encountering a weird NaN error in general.py during training after multiple epochs. Any idea why this error occurs or how to fix it?
Error message of `torch.autograd.detect_anomaly()`.
Cheers and many thanks in advance,
Christoph
Hard to say without more info, but my guess at the most likely cause is either 1) the input residual to the loss being extremely large (in which case clipping it should work) or NaN itself, or 2) `alpha` or `scale` becoming extremely large or small, in which case you probably want to manually constrain the range of values they can take using the module interface.
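For reference, a minimal sketch of both workarounds, assuming the `AdaptiveLossFunction` module interface with its `alpha_lo`/`alpha_hi`/`scale_lo`/`scale_init` constructor arguments; the clipping threshold, bounds, and `num_dims` below are illustrative values, not recommendations from this thread.

```python
import numpy as np
import torch
from robust_loss_pytorch import adaptive

# Assumed setup for this example: residuals are flattened to shape (batch, num_dims).
num_dims = 1

# Constrain alpha and scale so the latent parameters can't drift to extreme
# values during training (the bounds below are illustrative, not prescriptive).
adaptive_loss = adaptive.AdaptiveLossFunction(
    num_dims=num_dims,
    float_dtype=np.float32,
    device='cpu',
    alpha_lo=0.001,   # keep alpha bounded away from the edges of its range
    alpha_hi=1.999,
    scale_lo=1e-3,    # keep scale from collapsing toward zero
    scale_init=1.0)

def robust_loss(pred, target):
    residual = (pred - target).reshape(-1, num_dims)
    # Clip extreme residuals so a single bad sample can't push the loss to NaN.
    residual = torch.clamp(residual, min=-1e3, max=1e3)
    return torch.mean(adaptive_loss.lossfun(residual))
```

Note that the module's latent alpha/scale parameters still need to be passed to the optimizer alongside the model parameters, and checking the residuals with `torch.isfinite` before the loss call is a cheap way to rule out cause 1).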