Optimize DoRA computation when there is no dropout
Feature request
DoRA could be made faster and more memory-efficient if the base result were reused for the DoRA computation. However, this is only equivalent when there is no dropout, because DoRA operates on the input after dropout, whereas the base result is computed on the raw input. Therefore, the optimization could be applied when dropout=0 (i.e. when the dropout module is an nn.Identity) or during eval mode (where dropout is a no-op anyway).
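For illustration, here is a minimal, self-contained sketch of what the optimization could look like. This is not PEFT's actual code: `dora_forward`, its signature, and the `base_result` argument are all hypothetical, and the DoRA math is reduced to a plain linear layer. The point is that when dropout is a no-op, the `x @ W^T` term inside the DoRA computation equals the already-computed base result (minus the bias), so the extra matmul against the full base weight can be skipped.

```python
# Hypothetical sketch, not PEFT's actual code: `dora_forward`, its signature,
# and the `base_result` argument are made up for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dora_forward(x, base_layer, lora_A, lora_B, magnitude, scaling, dropout, base_result=None):
    # Dropout is applied to the DoRA input; when it is nn.Identity (dropout=0)
    # or the model is in eval mode, this is a no-op.
    x = dropout(x)
    lora_result = lora_B(lora_A(x))

    # Per-output-channel norm of the merged weight, detached as in the DoRA paper.
    weight = base_layer.weight
    weight_norm = (weight + scaling * lora_B.weight @ lora_A.weight).norm(p=2, dim=1).detach()
    mag_norm_scale = (magnitude / weight_norm).view(1, -1)

    if base_result is not None:
        # Reuse the cached base result instead of recomputing x @ W^T.
        # The bias has to be stripped, since the DoRA term only involves W.
        if base_layer.bias is not None:
            base_result = base_result - base_layer.bias
    else:
        base_result = F.linear(x, weight)

    return (mag_norm_scale - 1) * base_result + mag_norm_scale * lora_result * scaling

# Quick equivalence check: with dropout == nn.Identity, reusing the base
# result gives the same output as recomputing it.
torch.manual_seed(0)
base, lora_A, lora_B = nn.Linear(16, 8), nn.Linear(16, 4, bias=False), nn.Linear(4, 8, bias=False)
magnitude, scaling, dropout = torch.rand(8), 0.5, nn.Identity()

x = torch.randn(2, 16)
base_result = base(x)  # computed once by the base layer anyway

recomputed = base_result + dora_forward(x, base, lora_A, lora_B, magnitude, scaling, dropout)
reused = base_result + dora_forward(x, base, lora_A, lora_B, magnitude, scaling, dropout, base_result=base_result)
assert torch.allclose(recomputed, reused, atol=1e-6)
```

In practice, whether the cached result may be passed in would be gated at the call site, e.g. on `isinstance(dropout, nn.Identity)` or the module being in eval mode.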
Motivation
Faster and more memory-efficient DoRA when there is no dropout. Experimentally, dropout does not appear to be crucial for training DoRA; see this comment.
Your contribution
I can work on this when I have a bit of time, but contributions are very welcome.
Hey @BenjaminBossan, I would love to work on this.
Should I create a PR and then have the rest of the conversation there?
Thanks @ariG23498. Do as you like: if you have code, feel free to create a (draft) PR; otherwise, discussing here is also fine.