
Loss starts to increase during BERT model training

saparina opened this issue 4 years ago · 2 comments

Hi, I'm trying to reproduce your results with the BERT model. After ~14,000 training steps, the loss started to increase. I tried rerunning, but it didn't help. Have you faced this problem? It looks similar to #3 and #7.

Log:

[2020-07-28T15:25:54] Logging to logdir/bert_run/bs=6,lr=7.4e-04,bert_lr=3.0e-06,end_lr=0e0,att=1
...
[2020-07-29T13:59:21] Step 14100 stats, train: loss = 1.1323808431625366
[2020-07-29T13:59:27] Step 14100 stats, val: loss = 3.3228100538253784
...
[2020-07-29T14:08:51] Step 14200 stats, train: loss = 0.9168887138366699
[2020-07-29T14:08:57] Step 14200 stats, val: loss = 3.5443124771118164
...
[2020-07-29T14:18:30] Step 14300 stats, train: loss = 2.303567111492157
[2020-07-29T14:18:37] Step 14300 stats, val: loss = 4.652050733566284
...
[2020-07-29T14:28:01] Step 14400 stats, train: loss = 95.80101776123047
[2020-07-29T14:28:08] Step 14400 stats, val: loss = 112.55300903320312

saparina · Jul 29 '20 16:07

We faced the same issue in our training. Our best guess is that it is caused by gradient explosion. Even if you resume from the previous checkpoint, the same issue shows up again sometime down the line. Right now the only option is to start training from scratch. If you solve the issue with some other technique, please do share.
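One thing worth trying is to clip gradients and log the pre-clip norm, so an explosion becomes visible before the loss blows up. Below is a generic PyTorch sketch, not code from the rat-sql repo; `model`, `optimizer`, `loss`, and the thresholds are placeholders to adapt to your own training loop:

```python
import torch

def clipped_step(model, optimizer, loss, max_norm=1.0, warn_norm=100.0):
    """One optimizer step with gradient clipping and norm logging.

    max_norm and warn_norm are arbitrary placeholder values; tune per setup.
    """
    optimizer.zero_grad()
    loss.backward()
    # clip_grad_norm_ rescales gradients in place and returns the total
    # norm measured *before* clipping, which is the number to watch.
    grad_norm = float(torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm))
    if grad_norm > warn_norm:
        print(f"warning: pre-clip grad norm {grad_norm:.1f}, possible explosion")
    optimizer.step()
    return grad_norm
```

Logging the returned norm alongside the train/val losses makes it easy to check whether the spike at step ~14400 coincides with a gradient blow-up.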

senthurRam33 · Sep 03 '20 06:09

@senthurRam33 I also think the problem lies somewhere in the gradients. I changed the loss to 'label_smooth' (see #10) and got more stable training.
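For anyone curious what that loss does, here is a minimal sketch of label-smoothed cross-entropy in PyTorch. This is the generic formulation of the idea, not necessarily the exact 'label_smooth' implementation referenced in #10:

```python
import torch
import torch.nn.functional as F

def label_smoothed_nll(logits, targets, smoothing=0.1):
    """Cross-entropy with uniform label smoothing.

    logits: (batch, num_classes); targets: (batch,) of class indices.
    The gold class gets probability 1 - smoothing; the remaining mass
    is spread uniformly, which keeps the loss (and its gradients) from
    pushing logits toward extreme values.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # equals (1/K) * sum_k(-log p_k)
    return ((1.0 - smoothing) * nll + smoothing * uniform).mean()
```

Recent PyTorch versions (1.10+) expose the same idea directly via `F.cross_entropy(logits, targets, label_smoothing=0.1)`.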

saparina · Sep 24 '20 14:09