
Gradient explosion in TAPT/DAPT pretraining

Danshi-Li opened this issue 3 years ago · 2 comments

Hi, I am trying to reproduce the results of AdaptSum and ran into a problem when pretraining the model in the TAPT setting. It worked quite well for the science and debate datasets, where the data size is small. However, when I trained TAPT for social media, the loss exploded.

I run pretraining with the following command:

python ./src/tapt_pretraining.py \
    -path=./dataset/'social media'/TAPT-data/train.source \
    -dm='social media' \
    -visible_gpu=1 \
    -save_interval=1000 \
    -recadam \
    -logging_Euclid_dist

The loss then explodes to NaN during training:

(Epoch 0) LOSS: 2.291335   Euclid dist: 322.301648                                   13% 1999/15089 [17:55<1:47:14, 2.03it/s]
(Epoch 0) LOSS: 2.246833   Euclid dist: 959.653581                                   20% 2999/15089 [26:46<1:39:52, 2.01it/s]
(Epoch 0) LOSS: 9.272711   Euclid dist: 1541903563718079518205927655211008.00000     33% 3999/15089 [35:40<1:46:22, 1.74it/s]
(Epoch 0) LOSS: nan        Euclid dist: nan                                           40% 4999/15089 [44:14<1:21:29, 2.16it/s]
(Epoch 0) LOSS: nan        Euclid dist: nan                                           46% 5999/15089 [52:34<1:10:48, 1.80it/s]
(Epoch 0) LOSS: nan        Euclid dist: nan                                           53% 6999/15089 [1:01:15<1:14:45, 1.49it/s]

I tried lowering the learning rate to 0.01 and adjusting the gradient clipping value; this delayed the loss explosion but didn't solve the problem. Am I missing something or doing something wrong? What should I do to keep the model under control? A sketch of what I mean by clipping is below.
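For reference, this is roughly what I tried: a minimal, generic PyTorch sketch, not the actual AdaptSum training loop, and the HuggingFace-style `.loss` output is my assumption here.

import torch

def train_one_epoch(model, optimizer, dataloader, max_grad_norm=1.0):
    model.train()
    for step, batch in enumerate(dataloader):
        optimizer.zero_grad()
        loss = model(**batch).loss  # assumes the forward pass returns an object with .loss
        if torch.isnan(loss):
            # Stop (or skip the batch) as soon as the loss turns NaN,
            # instead of letting NaNs propagate into the weights.
            raise RuntimeError(f"NaN loss at step {step}")
        loss.backward()
        # Clip the global gradient norm; in my runs this only delays the explosion.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
        optimizer.step()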

Danshi-Li · Apr 30 '21

Hi Danshi,

When using RecAdam in TAPT, you may also need to set different "anneal_t0" and "anneal_k" values, because the RecAdam optimizer is very sensitive to these two parameters. In our experiments, as reported in the paper, "we select the best t0 and k in {500, 600, 700, 800, 900, 1,000} and {1e-2, 1e-3, 1e-4, 1e-5, 1e-6}". So the default values of these two parameters may cause the loss-explosion problem.
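For intuition, here is a rough sketch of how t0 and k shift the point where the optimizer stops pulling the weights back toward the pretrained model. It assumes the sigmoid annealing function described in the RecAdam paper and is only an illustration, not the AdaptSum source code.

# Illustration only: how t0 and k move the sigmoid annealing coefficient
# lambda(t) = 1 / (1 + exp(-k * (t - t0))), which (per the RecAdam paper)
# weights the task loss against the quadratic pull toward the pretrained weights.
import math

def anneal_lambda(step, t0, k):
    """Weight on the task loss; (1 - lambda) weights the recall of the pretrained weights."""
    return 1.0 / (1.0 + math.exp(-k * (step - t0)))

# Evaluate the grid quoted above at step 4000, roughly where the loss in the log blows up.
for t0 in [500, 600, 700, 800, 900, 1000]:
    for k in [1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
        lam = anneal_lambda(step=4000, t0=t0, k=k)
        print(f"t0={t0:4d}  k={k:.0e}  lambda(4000)={lam:.4f}")

A very small k keeps the transition gradual, while an ill-suited (t0, k) pair lets the task loss take over abruptly, which may be why the Euclidean distance in your log blows up around step 3000-4000.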

TysonYu · May 05 '21

Hi @TysonYu
Great work! I have a similar question regarding SDPT pretraining: the gradient explodes very early. I was wondering what the optimal values of t0 and k were in that case?

EngSalem · Jun 27 '22