
LossScaleOptimizer fails to prevent overflow

Open danijar opened this issue 3 years ago • 6 comments

The loss scale optimizer currently does not reduce the loss scale below 1:

https://github.com/keras-team/keras/blob/d8fcb9d4d4dad45080ecfdd575483653028f8eda/keras/mixed_precision/loss_scale_optimizer.py#L238-L239

My model is stuck not learning for the first ~1M gradient steps with the loss scale at its lower bound of 1.
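For context, the dynamic loss-scale update rule can be sketched in pure Python (a simplified, hypothetical version — the real Keras implementation also uses a growth-interval counter before raising the scale, and `update_loss_scale` is not an actual Keras function):

```python
def update_loss_scale(scale, grads_finite, growth_factor=2.0, lower_bound=1.0):
    """Simplified dynamic loss scaling: grow the scale when gradients
    are finite, halve it on overflow, but never drop below lower_bound
    (the Keras code linked above pins this bound at 1)."""
    if grads_finite:
        return scale * growth_factor
    return max(scale / growth_factor, lower_bound)

# Once the scale hits the bound, every nonfinite step leaves it stuck at 1
# and the step is skipped, which is the behavior reported here.
print(update_loss_scale(2.0, grads_finite=False))  # 1.0
print(update_loss_scale(1.0, grads_finite=False))  # 1.0 (stuck)
```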

danijar avatar Feb 20 '22 17:02 danijar

@danijar Can you please share a simple standalone code snippet to reproduce the issue? Thanks!

jvishnuvardhan avatar Feb 21 '22 21:02 jvishnuvardhan

I don't have a simple reproducer, sorry. But it should be easy to see that 1 is an arbitrary lower bound and breaks down for models with large gradients (e.g. many skip connections and image reconstruction loss that is a sum over thousands of pixels).

danijar avatar Feb 22 '22 14:02 danijar

@reedwm could you take a look here? Thanks!

mattdangerw avatar Feb 24 '22 18:02 mattdangerw

This is similar to https://github.com/tensorflow/tensorflow/issues/38357, except that issue involved an older version of the API. The same issue still remains though: the loss scale cannot go below 1.

As mentioned in the other issue, if gradients are still nonfinite when the loss scale reaches 1, every step will be skipped and training will not progress. This behavior is not great, but I cannot think of a good solution here. Ideally, we would raise an error, but that could cause a performance penalty by transferring the loss scale to the CPU, and there isn't a clear place in Keras where we would check whether the loss scale has reached 1.

We could also allow the loss scale to be below 1. However, LossScaleOptimizer would then no longer fulfill its sole purpose of preventing underflow, as a loss scale below 1 increases the chance of underflow. Allowing this would still be worth it if it helped mixed precision training, but I suspect it would not. For gradients to still overflow with a loss scale of 1, a gradient value would have to be larger than 65504, which is a very large value. I don't know of any models with gradients this large, so I suspect that when the loss scale reaches 1, the NaNs or Infs are being generated by something other than gradient overflow.
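The 65504 figure is float16's largest finite value; a quick NumPy check (an illustrative sketch, not part of the Keras code under discussion) shows what happens when an unscaled gradient exceeds it:

```python
import numpy as np

# float16's largest finite value is 65504; any gradient component
# above that overflows to inf even with a loss scale of 1.
print(np.finfo(np.float16).max)        # 65504.0
g = np.float16(65504) * np.float16(2)  # simulated oversized gradient
print(np.isinf(g))                     # True
```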

@danijar, there is a workaround in https://github.com/tensorflow/tensorflow/issues/38357#issuecomment-725656372. I suspect allowing the loss scale to go below 1 will not fix the model, however. You can alternatively try setting some layers to float32 by passing dtype="float32" and seeing if that helps. You can start by setting all or most layers to float32, then switch layers to mixed precision one at a time while checking whether the loss scale reaches 1.

/CC @nluehr

reedwm avatar Mar 07 '22 20:03 reedwm

Gradients can go above 65504 quite easily when predicting large images. I already switched to my own implementation of loss scaling that allows values below 1 and that works great. I just posted the issue here in case the Keras team wanted to fix it for others.
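A custom implementation along these lines might simply drop the clamp from the update rule (a hypothetical sketch in the spirit of the comment above — not danijar's actual code, and the function name is invented):

```python
def update_loss_scale_unbounded(scale, grads_finite, growth_factor=2.0):
    """Variant of dynamic loss scaling with no lower bound: the scale
    can drop below 1, trading some underflow risk for progress on
    models whose raw gradients exceed float16's maximum of 65504."""
    if grads_finite:
        return scale * growth_factor
    return scale / growth_factor

# With large-image reconstruction losses, the scale can settle below 1
# instead of getting stuck skipping every step.
print(update_loss_scale_unbounded(1.0, grads_finite=False))  # 0.5
```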

danijar avatar Mar 08 '22 03:03 danijar

@reedwm and others, it seems this issue has not been resolved yet. Could we add a boolean parameter prevent_overflow to LossScaleOptimizer that disables the lower bound when set to True?

thijs-vanweezel avatar Apr 23 '23 10:04 thijs-vanweezel