
[Experiment?] Convert LoRA models of higher rank into lower rank

Open wsxiaoys opened this issue 2 years ago • 3 comments

For unknown reasons, the anime community seems to always use R=128 as the rank in their LoRA fine-tuning, which produces model files of ~150 MB. I highly suspect such a high rank is unnecessary.
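As a back-of-envelope check on that ~150 MB figure: LoRA adds roughly `rank * (d_in + d_out)` parameters per adapted linear layer, so file size grows linearly with rank. A minimal sketch (the layer shapes below are made up for illustration, not Stable Diffusion's actual architecture):

```python
# Back-of-envelope size estimate: LoRA adds rank * (d_in + d_out) weights
# per adapted linear layer, so checkpoint size scales linearly with rank.
def lora_checkpoint_bytes(layer_shapes, rank, bytes_per_param=2):
    # layer_shapes: list of (d_out, d_in) per adapted layer; fp16 -> 2 bytes
    return sum(rank * (d_out + d_in) for d_out, d_in in layer_shapes) * bytes_per_param

# hypothetical shapes, just to show the linear scaling in rank
shapes = [(768, 768)] * 64
print(lora_checkpoint_bytes(shapes, 128) / 1e6, "MB")  # ~25 MB at r=128
print(lora_checkpoint_bytes(shapes, 32) / 1e6, "MB")   # 4x smaller at r=32
```

Whatever the exact layer list, dropping the rank by 4x shrinks the file by 4x.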

https://civitai.com/?query=lora

One experiment that might be helpful is to simply use SVD to lower the rank and compare the results visually.
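A minimal sketch of that experiment, assuming the LoRA pair is stored as up/down matrices (the names and shapes here are hypothetical, not this repo's API): form the full update ΔW = up @ down, truncate its SVD, and re-factor at the lower rank. By Eckart–Young, the truncated SVD is the best rank-r approximation in Frobenius norm.

```python
import torch

def lower_lora_rank(up: torch.Tensor, down: torch.Tensor, new_rank: int):
    """Re-factor a LoRA update (up @ down) at a lower rank via truncated SVD."""
    delta_w = up @ down                                  # (d_out, d_in) full update
    u, s, vh = torch.linalg.svd(delta_w, full_matrices=False)
    u, s, vh = u[:, :new_rank], s[:new_rank], vh[:new_rank, :]
    # split the singular values evenly between the two new factors
    new_up = u * s.sqrt().unsqueeze(0)                   # (d_out, new_rank)
    new_down = s.sqrt().unsqueeze(1) * vh                # (new_rank, d_in)
    return new_up, new_down

# quick check with random factors (real LoRA deltas should compress far better,
# since trained updates tend to have a decaying singular spectrum)
up, down = torch.randn(320, 128), torch.randn(128, 320)
new_up, new_down = lower_lora_rank(up, down, 32)
err = torch.linalg.norm(up @ down - new_up @ new_down) / torch.linalg.norm(up @ down)
print(f"relative Frobenius error at rank 32: {err:.3f}")
```

The relative error gives a quick quantitative check before visually comparing generations: if it's near zero on real checkpoints, the extra rank was mostly wasted.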

wsxiaoys avatar Jan 17 '23 08:01 wsxiaoys

You mean to post-process a model? That would be straightforward to do (there are some experiments in Adobe's Custom Diffusion paper on post-tuning decomposition that might interest you).

Rank 128 seems like a lot; I'm not sure you need it that high. I find ~30 pretty good for face details. Maybe you need a lot more for styles with lots of varied detail?

Sounds like a good experiment!

brian6091 avatar Jan 17 '23 08:01 brian6091

Yeah, definitely a doable experiment. Just for reference, lowering the rank from a fully trained model: #140

cloneofsimo avatar Jan 17 '23 17:01 cloneofsimo

From my testing, there's also a lot of variance in how the model is trained that needs to be taken into account.

When you use lower LoRA ranks, you generally need to push the learning rate pretty high (LR: 3e-4), whereas with higher ranks it's plausible to use lower learning rates that wouldn't learn anything before (LR: 5e-6).

This makes it difficult to compare results when changing the LoRA rank drastically, since the hyperparameters are very sensitive to the rank you use.

ExponentialML avatar Jan 17 '23 20:01 ExponentialML