LoRA
Question about reproducing RoBERTa-base fine-tuning
I want to reproduce the full fine-tuning performance of RoBERTa-base. Changing apply_lora from True to False in roberta_base_cola.sh does not produce the expected performance. What else should I do?
We took the full fine-tuning numbers from previous papers.
The learning rate in that script is tuned for LoRA, which is why simply turning LoRA off doesn't reproduce those numbers; you would also need to retune the learning rate (and possibly other hyperparameters) for full fine-tuning.
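As a rough sketch of what that means in practice: LoRA is usually trained with a much larger learning rate than full fine-tuning, so disabling it without lowering the LR tends to hurt accuracy. The flag names and values below are illustrative assumptions, not copied from the repo's script; check roberta_base_cola.sh for the exact arguments it passes.

```shell
# Hypothetical launch command for full fine-tuning on CoLA.
# All flag names and values here are assumptions for illustration:
# adapt them to whatever roberta_base_cola.sh actually uses.
python run_glue.py \
  --model_name_or_path roberta-base \
  --task_name cola \
  --apply_lora False \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --per_device_train_batch_size 32
```

The key change versus the LoRA script is the learning rate: LoRA configs often use something on the order of 4e-4 to 5e-4, whereas full fine-tuning of RoBERTa-base on GLUE is typically run around 1e-5 to 3e-5, so a small sweep over that range is a reasonable starting point.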