
Replication issue

Open ITBeyond1230 opened this issue 2 years ago • 11 comments

Thank you for sharing the code. I tried training the model from scratch following your training script and config; everything is the same except for the DIV8K dataset (I don't have DIV8K). At the time I tested it, the model had been trained for 12,000 steps (vs. your 16,500 steps).

The training script is:

python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus 0,1,2,3,4,5,6,7 --name StableSR_Replicate --scale_lr False

The test script is:

python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt CKPT_PATH --vqgan_ckpt VQGANCKPT_PATH --init-img INPUT_PATH --outdir OUT_DIR --ddpm_steps 200 --dec_w 0.0 --colorfix_type adain

The input image: OST_120

Results from the model I trained: OST_120

Results from your pretrained model: OST_120 (1)

What makes the difference? Is it the training steps or the DIV8K dataset? Or something else?

ITBeyond1230 avatar Jun 01 '23 02:06 ITBeyond1230

It is hard to say. For training, usually the longer, the better. After all, the official LDM seems to be trained for about 2.6M iterations with a batch size of 256. The performance between different checkpoints can also be different.

IceClear avatar Jun 01 '23 03:06 IceClear
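For a sense of scale, the numbers quoted in this thread work out as follows (this is only a back-of-envelope comparison; it ignores the per-GPU batch size of the fine-tuning runs, which the config determines):

```python
# Rough scale comparison, using only the numbers mentioned in the thread.
ldm_iters = 2_600_000   # reported LDM pretraining iterations
ldm_batch = 256         # reported LDM batch size
ldm_samples = ldm_iters * ldm_batch
print(f"LDM pretraining sees ~{ldm_samples / 1e6:.0f}M samples")  # ~666M

finetune_steps_mine = 12_000  # the replication run so far
finetune_steps_ref = 16_500   # the released StableSR checkpoint
print(f"fine-tune progress: {finetune_steps_mine / finetune_steps_ref:.0%}")  # 73%
```

So the replication checkpoint has seen roughly three quarters of the fine-tuning steps of the released one, on top of a pretraining corpus that dwarfs both.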

@IceClear Thanks for your quick response, I will try to train more steps and then check results. Also, besides longer training steps, what are the key factors that help us get a good model?

ITBeyond1230 avatar Jun 01 '23 03:06 ITBeyond1230

> It is hard to say. For training, usually the longer, the better. After all, LDM is trained for 2.6M iterations with a batch size of 256. The performance between different checkpoints can also be different.

"The performance between different checkpoints can also be different" — so why not consider using the EMA strategy in your practice? LDM seems to use EMA.

ITBeyond1230 avatar Jun 01 '23 03:06 ITBeyond1230

I guess longer training and more data should help.

IceClear avatar Jun 01 '23 03:06 IceClear

> It is hard to say. For training, usually the longer, the better. After all, LDM is trained for 2.6M iterations with a batch size of 256. The performance between different checkpoints can also be different.
>
> "The performance between different checkpoints can also be different" — so why not consider using the EMA strategy in your practice? LDM seems to use EMA.

I remember the code already uses EMA? Since we only tune a very small portion of the parameters, I am not sure how much gain can be obtained.

IceClear avatar Jun 01 '23 04:06 IceClear

> It is hard to say. For training, usually the longer, the better. After all, LDM is trained for 2.6M iterations with a batch size of 256. The performance between different checkpoints can also be different.
>
> "The performance between different checkpoints can also be different" — so why not consider using the EMA strategy in your practice? LDM seems to use EMA.
>
> I remember the code already uses EMA? Since we only tune a very small portion of the parameters, I am not sure how much gain can be obtained.

In the config, use_ema is set to False. Does that mean EMA is not used in training and testing? [image]

ITBeyond1230 avatar Jun 01 '23 04:06 ITBeyond1230

> It is hard to say. For training, usually the longer, the better. After all, LDM is trained for 2.6M iterations with a batch size of 256. The performance between different checkpoints can also be different.
>
> "The performance between different checkpoints can also be different" — so why not consider using the EMA strategy in your practice? LDM seems to use EMA.
>
> I remember the code already uses EMA? Since we only tune a very small portion of the parameters, I am not sure how much gain can be obtained.
>
> In the config, use_ema is set to False. Does that mean EMA is not used in training and testing? [image]

Oh, my bad. I think I did not add EMA support for the training on Stable Diffusion v2. You may give it a try if you are interested.

IceClear avatar Jun 01 '23 05:06 IceClear
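For anyone who wants to try this, here is a minimal sketch of what adding EMA support could look like in a PyTorch training loop. The class name, decay value, and the idea of tracking only the newly tuned parameters (rather than the whole U-Net) are illustrative assumptions, not code from the StableSR repository:

```python
import torch


class EMA:
    """Minimal exponential moving average over a set of parameters.

    Since StableSR fine-tunes only a small subset of weights, the EMA
    here tracks just the parameters passed in, not the full model.
    """

    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.params = list(params)
        # Shadow copies start equal to the live weights.
        self.shadow = [p.detach().clone() for p in self.params]

    @torch.no_grad()
    def update(self):
        # shadow = decay * shadow + (1 - decay) * live
        for s, p in zip(self.shadow, self.params):
            s.mul_(self.decay).add_(p.detach(), alpha=1.0 - self.decay)

    @torch.no_grad()
    def copy_to(self, params):
        # Load the averaged weights, e.g. before validation or inference.
        for s, p in zip(self.shadow, params):
            p.copy_(s)
```

Usage would be to call `ema.update()` after every `optimizer.step()` on the trainable parameters, then `ema.copy_to(...)` (ideally onto a separate copy of the weights) before evaluation, so the checkpoint-to-checkpoint variance mentioned above gets smoothed out.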

> @IceClear Thanks for your quick response, I will try to train more steps and then check results. Also, besides longer training steps, what are the key factors that help us get a good model?

Hi @ITBeyond1230, I think I have the same problem as you. Did you get better results for the first fine-tuning stage?

xyIsHere avatar Jul 05 '23 03:07 xyIsHere

@ITBeyond1230 @xyIsHere I seem to be having the same problem, have you guys had any good results?

q935970314 avatar Jul 31 '23 03:07 q935970314

@ITBeyond1230 @xyIsHere @q935970314 I also seem to be having the same problem. Have you had any good results? Following the settings in the code — same config, same dataset, same GPUs — I carefully selected among the trained checkpoints and tested all of them, but the results are still worse than the public stablesr_000117.ckpt. I also tried training longer, but it didn't help; the outputs became blurrier. So does using EMA work?

xiezheng-cs avatar Apr 23 '24 15:04 xiezheng-cs

@ITBeyond1230 By "training from scratch", do you mean you don't use the Stable Diffusion pretrained weights?

tuvovan avatar Feb 09 '25 09:02 tuvovan