keras_lr_finder
Stop when we reach end_lr even if the loss did not diverge
Stop when we reach end_lr even if the loss did not diverge. In some cases the loss never increases and diverges, so make sure we stop at end_lr anyway (as the user requested).
Thanks for the pull request!
Why does this situation happen? The finder uses start_lr and end_lr to compute the rate of lr increase:
```python
num_batches = epochs * x_train.shape[0] / batch_size
self.lr_mult = (end_lr / start_lr) ** (1 / num_batches)
lr *= self.lr_mult
```
So, after going through all epochs, we should end up roughly at end_lr. If that doesn't happen, it might be better to fix the calculation of lr_mult instead.
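As a quick sanity check of that reasoning, here is a small sketch (with hypothetical values for epochs, sample count, and batch size) showing that multiplying the learning rate by lr_mult once per batch takes it from start_lr to roughly end_lr:

```python
# Hypothetical settings, just to illustrate the arithmetic
start_lr, end_lr = 1e-7, 1.0
epochs, n_samples, batch_size = 1, 60000, 512

# Same formulas as in the finder
num_batches = epochs * n_samples / batch_size       # ~117.2 batches
lr_mult = (end_lr / start_lr) ** (1 / num_batches)  # per-batch growth factor

# Apply the per-batch update; int() truncation is why the result is only
# approximately end_lr, not exactly
lr = start_lr
for _ in range(int(num_batches)):
    lr *= lr_mult

print(lr)  # close to end_lr, slightly below due to the truncated batch count
```

The small shortfall comes only from rounding num_batches down to a whole number of batches, not from the growth formula itself.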
HA!
You're 100% correct. I was a bit too hasty with that one. I had seen a very similar piece of code in another repo (almost line-by-line identical), and that one did not call fit() itself, so it did not compute the number of batches correctly.
So, when I saw yours, I immediately thought you'd have the same issue, but you actually don't :)
Sorry for the inconvenience!