Can't reproduce the results for GLUE and hyperparameter misalignment
Hi, thanks for the great work.
I am trying to reproduce the RoBERTa-large results on the NLU tasks, but I got a CoLA score of 0 and an MNLI score of 31.3 using the provided fine-tuning scripts. I then noticed misalignments between the hyperparameters in the provided training scripts and those in the paper. For example, in roberta_large_cola.sh the learning rate is set to 3e-4, but the paper reports 2e-4. Which settings should I follow to reproduce the reported results?
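For reference, this is roughly what I tried after spotting the mismatch (a sketch only; the exact script path and flag names are assumptions based on a standard GLUE fine-tuning setup, not copied from the repo):

```shell
# Hypothetical re-run of the CoLA fine-tuning with the paper's learning
# rate (2e-4) instead of the 3e-4 found in roberta_large_cola.sh.
# Flag names (--learning_rate, --task_name) are assumed, not verified
# against the provided script.
python run_glue.py \
    --model_name_or_path roberta-large \
    --task_name cola \
    --learning_rate 2e-4
```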
Looking forward to your reply!
Best, Sean