Ranger-Deep-Learning-Optimizer
Did you try to fine-tune transformers LM with Ranger?
Recent transformers architectures are very famous in NLP: BERT, GPT-2, RoBERTa, XLNET. Did you try to fine-tune them on some NLP task? If so, what was the best Ranger hyper-parameters and learning rate scheduler?
Testing with XLNet should be prioritized as it is the current state of the art. ERNIE 2.0 would be interesting too.
@avostryakov I tried fine-tuning a BERT-based model for joint NER and relation classification. It performs about 1.5% worse on my tasks than the AdamW implementation in Transformers:
AdamW
- lr: 3e-5
- betas: (0.9, 0.999)
- eps: 1e-6
- weight_decay: 0.1
- correct_bias: True
Ranger
- lr: 3e-4
- betas: (0.95, 0.999)
- eps: 1e-5
- weight_decay: 0.1
It is possible that with more tuning I might be able to close the gap. If anyone else has any tips for fine-tuning BERT with Ranger, please let me know!
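For reference, here is a minimal sketch of how the two configurations above can be constructed. It assumes `model` is the BERT model being fine-tuned, that `AdamW` comes from the `transformers` package, and that the `Ranger` class from this repo is importable as shown; adjust imports to your setup.

```python
# Sketch of both optimizer setups with the hyperparameters listed above.
# Assumes `model` is a Hugging Face BERT model; package/import names may
# differ depending on how Ranger is installed.
from transformers import AdamW   # Transformers' AdamW (has the correct_bias flag)
from ranger import Ranger        # Ranger from this repo

# AdamW baseline
adamw = AdamW(
    model.parameters(),
    lr=3e-5,
    betas=(0.9, 0.999),
    eps=1e-6,
    weight_decay=0.1,
    correct_bias=True,
)

# Ranger with the settings I used
ranger = Ranger(
    model.parameters(),
    lr=3e-4,
    betas=(0.95, 0.999),
    eps=1e-5,
    weight_decay=0.1,
)
```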
I'm working with DETR, which uses a transformer internally for object detection, and will test it out there soon.
Note that Ranger now has GC (gradient centralization), and it will be interesting to see if that helps for transformers.
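If someone wants to compare with and without it, here is a rough sketch; it assumes the `use_gc` flag in the current Ranger constructor, so check your installed version.

```python
# Sketch: toggling gradient centralization when constructing Ranger.
# Assumes the use_gc flag of the current implementation in this repo.
from ranger import Ranger

opt_with_gc = Ranger(model.parameters(), lr=3e-4, use_gc=True)      # GC enabled
opt_without_gc = Ranger(model.parameters(), lr=3e-4, use_gc=False)  # GC disabled, for comparison
```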
How does Ranger perform for DETR?