fsdp_qlora

DoRA training not taking dropout or alpha into account

Open • BenjaminBossan opened this issue 6 months ago • 0 comments

I think there is a bug in the DoRA implementation: it takes neither lora_dropout nor lora_alpha into account. These arguments are passed as *args to the __init__ call of the DoRA layers but are subsequently ignored inside dora.py. This is easy to miss because the DoRA paper does not include them in its equations, but they are mentioned elsewhere in the paper and should be applied the same way as in the LoRA implementation.
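For reference, here is a minimal sketch of how lora_alpha and lora_dropout typically enter a LoRA-style linear layer (class and attribute names are illustrative, not the actual dora.py code): lora_alpha is folded into a scaling factor alpha / r on the low-rank update, and dropout is applied only to the input of the adapter branch.

```python
import torch
import torch.nn as nn


class LoRALinearSketch(nn.Module):
    """Illustrative LoRA wrapper showing where lora_alpha and lora_dropout act."""

    def __init__(self, base_layer: nn.Linear, r: int, lora_alpha: float, lora_dropout: float):
        super().__init__()
        self.base_layer = base_layer                       # frozen pretrained layer
        self.lora_A = nn.Linear(base_layer.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base_layer.out_features, bias=False)
        self.scaling = lora_alpha / r                      # lora_alpha enters via this factor
        self.dropout = nn.Dropout(p=lora_dropout)          # hits only the adapter branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base path sees the raw x; the low-rank update sees the dropped-out x,
        # scaled by alpha / r.
        return self.base_layer(x) + self.lora_B(self.lora_A(self.dropout(x))) * self.scaling
```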

Also note that lora_dropout is applied only to the LoRA/DoRA path, not to the base model output, which I believe has an impact on these lines, as they currently assume that the same x is used for the base layer and the DoRA part.
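To illustrate that second point, below is a hedged sketch of a DoRA-style forward pass, building on the LoRA sketch above. The names, the per-output-channel norm convention, and the detach on the weight norm are assumptions for illustration, not the actual dora.py code. The frozen base output is computed from the original x, while the adapter branch sees dropout(x); whenever lora_dropout > 0 those inputs differ, so the base-layer term inside the DoRA rescaling has to be recomputed from the dropped-out input rather than reusing the base output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoRALinearSketch(LoRALinearSketch):
    """Illustrative DoRA wrapper; adds a learned magnitude vector on top of the LoRA sketch above."""

    def __init__(self, base_layer: nn.Linear, r: int, lora_alpha: float, lora_dropout: float):
        super().__init__(base_layer, r, lora_alpha, lora_dropout)
        # Magnitude initialised to the per-output-channel norm of the frozen weight.
        self.magnitude = nn.Parameter(base_layer.weight.norm(p=2, dim=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base_out = self.base_layer(x)          # original x, no dropout on this path
        x_adapter = self.dropout(x)            # dropout only on the adapter branch
        lora_out = self.lora_B(self.lora_A(x_adapter)) * self.scaling

        # Per-output-channel norm of the combined weight W0 + scaling * B @ A.
        w_combined = self.base_layer.weight + self.scaling * (self.lora_B.weight @ self.lora_A.weight)
        weight_norm = w_combined.norm(p=2, dim=1).detach()
        mag_norm_scale = (self.magnitude / weight_norm).view(1, -1)

        # x_adapter differs from x when lora_dropout > 0, so the base-layer contribution to the
        # DoRA term is recomputed from x_adapter instead of reusing base_out.
        return (
            base_out
            + (mag_norm_scale - 1) * F.linear(x_adapter, self.base_layer.weight)
            + mag_norm_scale * lora_out
        )
```

With dropout disabled, this reduces to mag_norm_scale * (base_out + lora_out), i.e. the usual DoRA rescaling of the combined weight; with dropout enabled, reusing base_out for both terms would silently change the result.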

BenjaminBossan • Aug 15 '24 10:08