brian6091

27 comments by brian6091

What is your prior_loss_weight? Not sure there is a problem per se, as your total loss is decreasing.
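For reference, here's a minimal sketch of how `prior_loss_weight` typically enters the total loss in diffusers-style DreamBooth training (the function name and the instance/prior batch layout are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def dreambooth_loss(model_pred: torch.Tensor,
                    target: torch.Tensor,
                    prior_loss_weight: float) -> torch.Tensor:
    # Assumed batch layout: instance examples stacked first,
    # class (prior-preservation) examples second.
    pred_instance, pred_prior = torch.chunk(model_pred, 2, dim=0)
    target_instance, target_prior = torch.chunk(target, 2, dim=0)
    instance_loss = F.mse_loss(pred_instance.float(), target_instance.float())
    prior_loss = F.mse_loss(pred_prior.float(), target_prior.float())
    # A large prior_loss_weight lets the prior term dominate, so the
    # total loss can keep decreasing even if the instance term stalls.
    return instance_loss + prior_loss_weight * prior_loss
```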

I can confirm what @qunash found. With gradient checkpointing enabled, the lora weights for both the unet and the text encoder will not change when printed to screen. However, in...

@kingofprank for joint training, I've enabled gradient checkpointing for the unet only, and just *not* enabled it for the text encoder. This works, and I think you get most of...
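A minimal sketch of that workaround, assuming a diffusers `UNet2DConditionModel` and a transformers `CLIPTextModel` (the base model ID is an assumption):

```python
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

model_id = "runwayml/stable-diffusion-v1-5"  # assumed base model
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Checkpoint the unet to cut activation memory during joint training...
unet.enable_gradient_checkpointing()
# ...but deliberately skip text_encoder.gradient_checkpointing_enable(),
# so the text encoder's lora weights still receive gradients and update.
```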

> > I face the same problem when enabling gradient checkpointing. Is there a way to solve this under text-encoder and unet joint training?
>
> Kohya's repo seems...

If either of you are interested in making a LoRA comparison, I'd be happy to help out with running this.

I've posted some images of the effect of tuning both the unet and the text encoder (with prior preservation) in the discussions section: https://github.com/cloneofsimo/lora/discussions/37

> > I've posted some images of the effect of tuning both the unet and the text encoder (with prior preservation) in the discussions section:
> >
> > #37
>
> ...

@JohnnyRacer Are you running in Colab? Which script or notebook are you using? I think I've seen this kind of error when there is a malformed input.

@JohnnyRacer I think adding this flag would work: `--with_prior_preservation`. I would also add `--prior_loss_weight=1.0`, since the default weight is oddly huge.
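As a sketch, those flags would slot into a training command roughly like this (the script name, model, and paths here are assumptions; check the repo's README for the exact interface):

```bash
accelerate launch train_lora_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./instance_images" \
  --class_data_dir="./class_images" \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --output_dir="./lora_output"
```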