brian6091

Results 52 comments of brian6091

Ah got it. That's a pretty easy change; I could do that in a day or so if you want to test it ;-)

So I set alpha=4 as the default so that everything produces scale=1.0 out of the box (which is what you had before). Hence my last addition of 'tune_lora_alpha' while leaving 'tune_lora_scale'...
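
For context, a minimal sketch of how alpha and rank combine into the effective scale in a LoRA layer (the `LoRALinear` wrapper and its attribute names here are illustrative, not the repo's actual API): with scale defined as alpha / rank, alpha=4 at the default rank of 4 gives scale=1.0.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Illustrative LoRA wrapper: y = W x + (alpha / rank) * B A x
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        self.scale = alpha / rank  # alpha == rank -> scale = 1.0
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # B is zero-initialized, so the LoRA update starts as a no-op
        return self.base(x) + self.scale * (x @ self.lora_A.t() @ self.lora_B.t())
```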

You raise great points that I completely agree with. Using both alpha and scale is not a good idea, and even the LoRA authors ignore this after defining it. So...

> So maybe keeping alpha intact, setting default scale = 1, and exposing it to advanced users might work?
>
> * during saving, fallback scale into A?
>
> ...
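
To illustrate the quoted "fallback scale into A" idea, a hedged sketch of baking the scale into A at save time, so that a loader which assumes scale = 1 reproduces the same effective update (the `lora_layers` dict and attribute names are hypothetical):

```python
def fold_scale_into_A(lora_layers):
    # (s * B A) x == B (s A) x, so multiplying A by the scale before
    # saving lets a scale-unaware loader recover the same behavior
    state = {}
    for name, layer in lora_layers.items():
        state[f"{name}.lora_A"] = layer.lora_A.data * layer.scale
        state[f"{name}.lora_B"] = layer.lora_B.data.clone()
    return state
```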

Seems like a good idea. I've been using a [longer set of prompt templates](https://github.com/brian6091/Dreambooth/blob/e13266709b0912ccba1bd219ea34a69325a306da/src/textual_inversion_templates.py#L34) from the author of the textual inversion paper. I've never explored the effects of changing these...
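
For reference, templates in that style look like the following (a shortened excerpt; see the linked file for the full set, and treat the exact strings here as illustrative):

```python
# "{}" is replaced by the placeholder token for the concept being trained
templates = [
    "a photo of a {}",
    "a rendering of a {}",
    "a cropped photo of the {}",
    "a photo of a clean {}",
]

prompt = templates[0].format("sks")  # -> "a photo of a sks"
```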

Thanks for the paper! @cloneofsimo this should mix nicely with training/varying rank by block. I'm having a closer read of the paper.

You mean to post-process a model? That would be straightforward to do (there are some experiments in Adobe's Custom Diffusion paper on post-tuning decomposition that might interest you). Rank 128 seems...
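
A minimal sketch of what that post-processing could look like: take the truncated SVD of the fine-tuned weight delta and keep the top singular directions (the function name and rank argument are illustrative):

```python
import torch

def decompose_delta(w_finetuned: torch.Tensor, w_base: torch.Tensor, rank: int = 128):
    # Truncated SVD of the weight update: delta ≈ B @ A with inner dimension `rank`
    delta = w_finetuned - w_base
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    B = U[:, :rank] * S[:rank]  # (out_features, rank)
    A = Vh[:rank, :]            # (rank, in_features)
    return B, A
```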

Seems like this depends on when intermediate sampling happens. Possibly the sampling changes the RNG state despite the attempt to set it back to the state it was in before inference?
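
If RNG state is the culprit, one way to check is to snapshot and restore the torch generators around the intermediate sampling step (`run_intermediate_sampling` is a hypothetical stand-in for the validation/sampling pass):

```python
import torch

# Snapshot generator state before intermediate sampling ...
cpu_state = torch.get_rng_state()
cuda_states = torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None

run_intermediate_sampling()  # hypothetical stand-in

# ... and restore it afterwards so training randomness is unchanged
torch.set_rng_state(cpu_state)
if cuda_states is not None:
    torch.cuda.set_rng_state_all(cuda_states)
```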

Maybe pull in the model here: https://github.com/huggingface/diffusers/pull/1930/files

https://www.pythonpool.com/ast-literal_eval/
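
The linked page covers `ast.literal_eval`, which safely parses Python literals from strings without executing arbitrary code; a minimal example:

```python
import ast

# Accepts literals only (strings, numbers, tuples, lists, dicts, sets,
# booleans, None); anything else raises ValueError
ranks = ast.literal_eval("[4, 8, 16]")
config = ast.literal_eval("{'rank': 4, 'alpha': 4.0}")
```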