
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning

Open · backroom-coder opened this issue 4 weeks ago · 0 comments

https://arxiv.org/abs/2405.12130

MoRA is a PEFT technique that replaces LoRA's pair of low-rank matrices with a single square matrix. The main idea is to spend the same trainable-parameter budget in a way that achieves the highest possible rank for the weight update in the space of the model's original dimensions.
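
For intuition, here is a minimal PyTorch sketch of the idea, not the paper's or LLaMA-Factory's implementation: it uses a simple reshape-based compress/decompress pair (one of several non-parameterized operators the paper discusses), and the class name `MoRALinear` plus all sizes are hypothetical, chosen only for illustration.

```python
import torch
import torch.nn as nn

class MoRALinear(nn.Module):
    """Frozen linear layer plus a MoRA-style high-rank square-matrix update (sketch)."""

    def __init__(self, base: nn.Linear, r_hat: int):
        super().__init__()
        d = base.in_features
        # Sketch assumes a square layer (e.g. an attention projection) and
        # a hidden size divisible by r_hat.
        assert base.out_features == d and d % r_hat == 0
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight
        # One trainable square matrix: r_hat**2 parameters, comparable to
        # LoRA's 2*d*r budget when r_hat is roughly sqrt(2*d*r).
        self.M = nn.Parameter(torch.zeros(r_hat, r_hat))  # zero-init: update starts as a no-op
        self.r_hat = r_hat

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Compress: split the hidden dimension into d // r_hat segments of size r_hat.
        segs = x.reshape(*x.shape[:-1], -1, self.r_hat)
        # Apply the shared square matrix to every segment, then decompress by
        # flattening back to dimension d. The effective weight update is
        # block-diagonal in M, so its rank can grow toward d rather than being
        # capped at a small r as in LoRA.
        delta = (segs @ self.M.T).flatten(-2)
        return self.base(x) + delta

# Hypothetical usage: matching LoRA rank 8 on d = 4096 gives a budget of
# 2 * 4096 * 8 = 65536 parameters, i.e. r_hat = sqrt(65536) = 256.
layer = MoRALinear(nn.Linear(4096, 4096), r_hat=256)
y = layer(torch.randn(2, 16, 4096))
```

The design point the sketch tries to show: with the same parameter count, a single r_hat x r_hat square matrix can produce a much higher-rank update than the rank-r product of two thin matrices.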

In the paper's experiments, MoRA significantly outperformed LoRA and came much closer to the performance of a fully fine-tuned model, while requiring far fewer trainable parameters and training steps than full fine-tuning.

backroom-coder · May 29 '24 16:05