AlignProp

How to adapt to non-square image model training and inference?

Open SkylerZheng opened this issue 2 years ago • 52 comments

Can you share how to adapt to non-square image model training and inference? Is it possible to use the Stable Diffusion pipeline to generate non-square images?

SkylerZheng avatar Oct 12 '23 00:10 SkylerZheng

Hi, you mentioned you will adapt this to SD 2.1; can you specify how you plan to do that?

SkylerZheng avatar Oct 12 '23 00:10 SkylerZheng

@mihirp1998 Any thoughts on this? I tried to adapt to SD 2.1, but the HPS score is very low; after training for 20 epochs it's still around 0.24. I'm wondering what went wrong with my experiment.

SkylerZheng avatar Oct 13 '23 17:10 SkylerZheng

I haven't tried SD 2.1 yet; I plan to try it over the weekend. Also, I'm not sure what the issue is with non-square image training. Can you elaborate on the problems you are facing with SD 2.1 training and non-square image training? That would help me with the integration.

mihirp1998 avatar Oct 13 '23 18:10 mihirp1998

Hi @mihirp1998, thank you very much for the quick response. I am trying to train with SD 2.1: I changed the latent height and width in the VAE from 64, 64 to 96, 96 (512 vs. 768 pixels), but the generated images from epoch 0 are nonsense, and the longer I train the model, the worse the quality gets. The HPS reward stays in the range of 0.22 to 0.24.

I also tried a non-square setting (128, 72); same issue.

I'm wondering, besides the VAE config, what else do I need to change? What is the constant 0.18215 here? Do I need to change it for SD 2.1?
ims = pipeline.vae.decode(latent.to(pipeline.vae.dtype) / 0.18215).sample
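
For what it's worth, the 0.18215 constant is the VAE latent scaling factor that SD 1.x and 2.x checkpoints share, so it should not need to change for SD 2.1. A minimal sketch of reading it from the pipeline config instead of hardcoding it, assuming a stock diffusers StableDiffusionPipeline (the model id and latent shape below are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; swap in whatever you are actually training with.
pipeline = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# 0.18215 is the VAE latent scaling factor. Reading it from the config avoids
# hardcoding it and also covers checkpoints that ship a different value
# (SDXL's VAE, for instance, uses 0.13025).
scale = pipeline.vae.config.scaling_factor  # 0.18215 for SD 1.x / 2.x

# Dummy 96x96 latent just to illustrate shapes and dtype:
# 96x96 latents decode to 768x768 images.
latent = torch.randn(1, 4, 96, 96, device="cuda", dtype=pipeline.vae.dtype)
ims = pipeline.vae.decode(latent / scale).sample
```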

BTW, accelerate does not work for me, so I can only use 1 GPU for training. I have scaled the lr down to 1e-4 or even 5e-5, with no improvement.

config = set_config_batch(config, total_samples_per_epoch=256,total_batch_size=32, per_gpu_capacity=1)
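
I have not checked the repo's set_config_batch, but the bookkeeping a helper like that has to do presumably looks roughly like the sketch below: hold the effective batch size fixed and make up for a small per-GPU capacity with gradient accumulation (the function and field names here are illustrative, not the repo's):

```python
def set_config_batch_sketch(config, total_samples_per_epoch, total_batch_size,
                            per_gpu_capacity, num_gpus=1):
    """Illustrative re-derivation, not the repo's actual code.

    `total_batch_size` is the effective batch per optimizer step; when each GPU
    only fits `per_gpu_capacity` samples, the rest comes from gradient accumulation.
    """
    assert total_batch_size % (per_gpu_capacity * num_gpus) == 0
    assert total_samples_per_epoch % total_batch_size == 0

    config.train.batch_size_per_gpu = per_gpu_capacity
    config.train.gradient_accumulation_steps = total_batch_size // (per_gpu_capacity * num_gpus)
    config.train.num_updates_per_epoch = total_samples_per_epoch // total_batch_size
    return config

# With the values above (256 samples/epoch, effective batch 32, 1 sample per GPU,
# 1 GPU): gradient_accumulation_steps = 32, i.e. 8 optimizer updates per epoch.
```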

Any advice or help is appreciated! Thanks!

SkylerZheng avatar Oct 13 '23 19:10 SkylerZheng

Okay, I'll look into SD 2.1.

BTW, what is the error you get with accelerate in the multi-GPU setting? Also, does accelerate work for you with other repos, or is it just this repo where it doesn't work?

mihirp1998 avatar Oct 13 '23 23:10 mihirp1998

@mihirp1998, cool, thanks a lot! When I use accelerate, the training just hangs; it looks like no data has been loaded at all, so no training is happening. I have used accelerate with DreamBooth model training and it worked, so it could be that Python 3.10 and accelerate 0.17.0 are not compatible with my AWS EC2 environment. Please let me know if you have any updates on SD 2.1! I tried to load stabilityai/stable-diffusion-2-1 for training, but the losses are NaN; I printed the latent values and they are all NaN, yet the evaluation works fine, which is very weird. Let me know if you have encountered the same problem!

SkylerZheng avatar Oct 15 '23 22:10 SkylerZheng

For accelerate, does it hang after one epoch, or from the beginning?

Can you try removing this line and running it again:

https://github.com/mihirp1998/AlignProp/blob/a269c5af788792509c0184b0828abcc90f0038ec/main.py#L602

mihirp1998 avatar Oct 16 '23 01:10 mihirp1998

@mihirp1998 From the beginning. I did not use accelerate for SD 1.5, and I was able to replicate your results. Sure, let me try this, thank you!

SkylerZheng avatar Oct 16 '23 01:10 SkylerZheng

@mihirp1998 Still no luck. Have you tried it on SD 2.1? Any good news?

SkylerZheng avatar Oct 16 '23 17:10 SkylerZheng

@mihirp1998 This is the training log with SD 2.1; the loss does not drop but increases gradually... [screenshot: training loss curve]

SkylerZheng avatar Oct 17 '23 16:10 SkylerZheng

Can you maybe try lower learning rates to see if the loss goes down?

I did try SD 2.1-base and found a similar issue of the loss not going down. I think I'll have to look into it more closely to get it to work.

Playing with the learning rate or with which parameters to adapt (LoRA vs. the full UNet, or changing the LoRA rank) might be worth trying.

mihirp1998 avatar Oct 17 '23 17:10 mihirp1998

Also, I'd recommend directly trying SDXL instead: https://stablediffusionxl.com/

As I think it's probably better than SD 2.1.

mihirp1998 avatar Oct 17 '23 17:10 mihirp1998

Hi @mihirp1998, thank you very much for the confirmation! I did try different LoRA ranks and different learning rates; none of them worked. Unfortunately, SDXL is too big for us, so we can only consider SD 2.1. I will keep looking into this and keep you posted! BTW, accelerate now works with multiple GPUs for me, thankfully!

SkylerZheng avatar Oct 17 '23 18:10 SkylerZheng

I see; what changed in accelerate to get it to work?

mihirp1998 avatar Oct 17 '23 18:10 mihirp1998

I see; what changed in accelerate to get it to work?

I honestly do not know. Maybe the system updates helped...

SkylerZheng avatar Oct 17 '23 20:10 SkylerZheng

@mihirp1998 This is the training log with SD 2.1; the loss does not drop but increases gradually... [screenshot: training loss curve]

Are these curves with SD 2.1 or SD 2.1-base?

If they are with SD 2.1, then how did you fix the NaN problem?

mihirp1998 avatar Oct 17 '23 20:10 mihirp1998

@mihirp1998 This is SD 2.1. I used pipeline.unet to do the prediction instead of unet, but this is a bit different from your original LoRA setting. I believe the loss increases because the lr is too big: I reduced per_gpu_capacity to 1 but the lr is still 1e-3. When I changed the lr from 1e-3 to 1e-4, the loss neither drops nor increases. [screenshot: loss curve] I also tried the new LoRA setting with SD 1.5 and it does not seem to work well; check the orange wandb logs attached. [screenshot: wandb curves]

SkylerZheng avatar Oct 17 '23 20:10 SkylerZheng

I see, so I'm assuming you are no longer updating the LoRA parameters but the whole UNet?

Also, can you try setting: config.train.adam_weight_decay = 0.0

Try both settings, updating with and without LoRA; I'm not sure why you get NaN with LoRA.
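
In case it helps to see where that flag would land: a minimal sketch of the usual way adam_weight_decay feeds the optimizer, assuming the trainer builds a standard torch.optim.AdamW over whatever parameters are left trainable (the placeholder module below just stands in for the LoRA-wrapped UNet):

```python
import torch
import torch.nn as nn

# Placeholder for the LoRA-wrapped UNet; only its trainable params get optimized.
model = nn.Linear(16, 16)

adam_weight_decay = 0.0  # the suggested override: decoupled weight decay otherwise
                         # shrinks the LoRA matrices a little on every step
learning_rate = 1e-4

trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(
    trainable_params,
    lr=learning_rate,
    weight_decay=adam_weight_decay,
)
```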

mihirp1998 avatar Oct 17 '23 21:10 mihirp1998

No, I did freeze the UNet and am only updating LoRA; otherwise the memory will explode, as you mentioned in your paper. Let me try config.train.adam_weight_decay = 0.0. Are you not getting the NaN problem with SD 2.1?

SkylerZheng avatar Oct 17 '23 21:10 SkylerZheng

I don't understand how this fixes the NaN problem. What is happening here and how does this change anything?

I used pipeline.unet to do the prediction instead of unet, but this is a bit different from your original LoRA setting.

mihirp1998 avatar Oct 17 '23 21:10 mihirp1998

I don't understand how this fixes the NaN problem. What is happening here and how does this change anything?

I used pipeline.unet to do the prediction instead of unet, but this is a bit different from your original LoRA setting.

It is weird indeed, but it seems like the LoRA layers as added do not work for SD 2.1. I'm thinking we can try other ways of adding LoRA for SD 2.1, for example PEFT.
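
For reference, a minimal sketch of what attaching LoRA through PEFT to an SD 2.1 UNet could look like; the target module names follow the attention projections commonly used for SD UNets, and the rank/alpha values are just placeholders (untested with this repo's training loop):

```python
import torch
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float32
)
unet.requires_grad_(False)  # freeze base weights; only the LoRA matrices should train

lora_config = LoraConfig(
    r=4,           # LoRA rank (placeholder)
    lora_alpha=4,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet = get_peft_model(unet, lora_config)
unet.print_trainable_parameters()  # sanity check: only a small fraction should be trainable
```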

SkylerZheng avatar Oct 17 '23 21:10 SkylerZheng

Okay, sure, but do you know what caused the NaN outcome in the first place?

BTW, I tried SD 2.1-base with config.train.adam_weight_decay = 0.0 set, and I find the loss goes down.

mihirp1998 avatar Oct 17 '23 21:10 mihirp1998

Okay, sure, but do you know what caused the NaN outcome in the first place?

BTW, I tried SD 2.1-base with config.train.adam_weight_decay = 0.0 set, and I find the loss goes down.

Do you know what caused the NaN outcome in the first place? --> I just replaced SD 1.5 with stabilityai/stable-diffusion-2-1 from Hugging Face and changed the latent dimension from 64 to 96. As a result, the LoRA weights were not updated due to the NaN problem, so the image quality stays unchanged.

Great to hear that! Can you also help try SD 2.1? For SD 2.1 the resolution changes from 512 to 768, so per_gpu_capacity will also go down from 4 to 1, which will affect the lr.
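
On the learning-rate point: if the drop in per_gpu_capacity is compensated with gradient accumulation, the effective batch size, and hence a reasonable lr, need not change; if the effective batch size really does shrink, the usual (heuristic, not guaranteed) linear scaling rule is a starting point. A tiny sketch with made-up numbers:

```python
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling heuristic: keep lr proportional to the effective batch size."""
    return base_lr * new_batch / base_batch

# Made-up example: 1e-3 was tuned for an effective batch of 128; dropping to an
# effective batch of 32 at 768px suggests trying roughly 2.5e-4 first.
print(scaled_lr(1e-3, base_batch=128, new_batch=32))  # 0.00025
```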

SkylerZheng avatar Oct 17 '23 22:10 SkylerZheng

As a result, the LoRA weights were not updated due to the NaN problem, so the image quality stays unchanged.

I think having 64 as the latent height/width was causing the NaN issue. SD 2.1 should probably work after setting weight_decay to 0.

Can you also help try SD 2.1?

I plan to try this after a week, and will also try the SD refiner then, as I have a NeurIPS camera-ready deadline. But I think SD 2.1-base is working, and I think the same strategy should work for SD 2.1. Let me know if it works for you.

mihirp1998 avatar Oct 17 '23 22:10 mihirp1998

I think having 64 as the latent height/width was causing the NaN issue. SD 2.1 should probably work after setting weight_decay to 0.

I tried 96 and the NaN issue was still not solved. I'm currently testing with 0 weight decay; hopefully it will work!

Thanks a lot for the help! I will keep you posted on this.

SkylerZheng avatar Oct 17 '23 22:10 SkylerZheng

Sorry for hijacking this thread, but when trying to adapt this for SDXL, this occurs:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (308x768 and 2048x640)

It seems that the LoRA implementation for SDXL is completely different too.
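
The 2048 in that shape mismatch matches SDXL's cross-attention context width: its UNet expects the concatenation of two text encoders' hidden states (768 + 1280 = 2048), whereas the 768 looks like a single SD 1.x-style text embedding is still being fed in. A quick way to check what a given UNet expects, assuming diffusers is available (this reads only the config, no weights):

```python
from diffusers import UNet2DConditionModel

# SDXL base UNet config: cross_attention_dim should be 2048 (two text encoders, 768 + 1280).
sdxl_cfg = UNet2DConditionModel.load_config(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
print(sdxl_cfg["cross_attention_dim"])  # 2048

# SD 1.x UNets report 768 here and SD 2.x UNets report 1024, which is why prompt
# embeddings prepared for one family cannot be fed to another unchanged.
```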

Xynonners avatar Oct 19 '23 07:10 Xynonners

Got much further and am now running into a negative tensor issue in the backprop...

Xynonners avatar Oct 19 '23 12:10 Xynonners

Thanks! If you are successful in integrating it, please do send a pull request. I would love to integrate it.

mihirp1998 avatar Oct 19 '23 20:10 mihirp1998

Got much further and am now running into a negative tensor issue in the backprop...

Oddly, after a few hours of working on this, the issue can be skirted by setting the latent dim to 64x64 or 96x96 rather than 128x128 (which causes the issue)...

EDIT: it seems like the LoRA still isn't training, even though it reports that everything is fine.

Xerxemi avatar Oct 22 '23 04:10 Xerxemi

@mihirp1998, cool, thanks a lot! When I use accelerate, the training just hangs; it looks like no data has been loaded at all, so no training is happening. I have used accelerate with DreamBooth model training and it worked, so it could be that Python 3.10 and accelerate 0.17.0 are not compatible with my AWS EC2 environment. Please let me know if you have any updates on SD 2.1! I tried to load stabilityai/stable-diffusion-2-1 for training, but the losses are NaN; I printed the latent values and they are all NaN, yet the evaluation works fine, which is very weird. Let me know if you have encountered the same problem!

In my experience, losses end up as NaN when using float16; bfloat16 doesn't have this issue. I still have to check whether lowering the latent dim causes NaN on SDXL.

EDIT: calling pipeline.upcast_vae() upcasts parts of the VAE to float32, bypassing the issue.
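
For anyone hitting the same thing, a minimal sketch of the two workarounds mentioned here, assuming a stock diffusers StableDiffusionXLPipeline (the prompt and step count are arbitrary):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Workaround 1: load in bfloat16 rather than float16. bf16 keeps fp32's exponent
# range, so the VAE decode is far less likely to overflow into NaNs.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

# Workaround 2: upcast parts of the VAE to float32 before decoding, while the
# rest of the pipeline stays in reduced precision.
pipe.upcast_vae()

image = pipe("a photo of a corgi", num_inference_steps=20).images[0]
```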

Xerxemi avatar Oct 22 '23 04:10 Xerxemi