Ranger-Deep-Learning-Optimizer

Did you try to fine-tune transformer LMs with Ranger?

avostryakov opened this issue 5 years ago · 4 comments

Recent transformer architectures are very famous in NLP: BERT, GPT-2, RoBERTa, XLNet. Did you try to fine-tune them on some NLP task? If so, what were the best Ranger hyper-parameters and learning rate scheduler?

avostryakov avatar Sep 17 '19 08:09 avostryakov

Testing on XLNet should be prioritized, as it is the current state of the art. ERNIE 2.0 would be interesting too.

LifeIsStrange avatar Oct 06 '19 19:10 LifeIsStrange

@avostryakov I tried fine-tuning a BERT-based model for joint NER and relation classification. It performs ~1.5% worse on my tasks than the AdamW implementation in Transformers:

AdamW

  • lr: 3e-5
  • betas: (0.9, 0.999)
  • eps: 1e-6
  • weight_decay: 0.1
  • correct_bias: True

Ranger

  • lr: 3e-4
  • betas: (0.95, 0.999)
  • eps: 1e-5
  • weight_decay: 0.1

It is possible that with more tuning I might be able to close the gap. If anyone else has any tips for fine-tuning BERT with Ranger, please let me know!
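
For reference, a minimal sketch of how those settings might map to code, assuming the AdamW class that ships with Hugging Face Transformers (which exposes a `correct_bias` flag) and the Ranger class from this repo; `model` is any BERT-based `nn.Module` and is just a placeholder here:

```python
# Hedged sketch: wiring the hyper-parameters above into the two optimizers.
from transformers import AdamW  # Transformers' AdamW takes a correct_bias flag
from ranger import Ranger       # import path may vary depending on how the repo is installed

adamw = AdamW(
    model.parameters(),
    lr=3e-5,
    betas=(0.9, 0.999),
    eps=1e-6,
    weight_decay=0.1,
    correct_bias=True,
)

ranger = Ranger(
    model.parameters(),
    lr=3e-4,
    betas=(0.95, 0.999),
    eps=1e-5,
    weight_decay=0.1,
)
```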

JohnGiorgi avatar Oct 11 '19 20:10 JohnGiorgi

I'm working with DETR, which does object detection with a transformer internally, and will test Ranger out there soon.
Note that Ranger now has GC (gradient centralization), and it will be interesting to see whether that helps for transformers.
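
For anyone unfamiliar with GC: it subtracts the per-filter mean from each weight gradient before the update, as described in the gradient centralization paper. A minimal sketch of that step (the exact code inside Ranger's `step()` may differ):

```python
# Sketch of the gradient-centralization (GC) operation.
import torch

def centralize_gradient(grad: torch.Tensor) -> torch.Tensor:
    # Only centralize multi-dimensional weights (conv / linear), not biases:
    # subtract the mean taken over all dimensions except the output dimension.
    if grad.dim() > 1:
        grad = grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad
```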

lessw2020 avatar Jun 11 '20 19:06 lessw2020

How does Ranger perform for DETR?

hiyyg avatar Sep 25 '23 00:09 hiyyg