latent-diffusion
Is anyone able to reproduce the LDM performance (FID) on FFHQ / CelebA with multiple GPUs?
If I use `scale_lr True`, the model won't converge. If I use `scale_lr False`, the FID is terrible (~40).
Same here, only one GPU works for me. Contact me if you are interested. WeChat: gong_ke_nv
@zhangqizky @Ir1d Do you know why one GPU works but multiple GPUs do not? How can I make multiple GPUs work?
Your email has been received; I will reply as soon as possible. Thank you. Zhang Qi
You need to make sure the total lr is the same as with 1 GPU, but even in that case my FID is still larger than reported.
@Ir1d When I used multiple GPUs, I set `--scale_lr False` to force the total lr to be the same as with 1 GPU. However, training with multiple GPUs still gave bad results. When I trained on one GPU without changing anything else, the results were much better. It seems that multi-GPU training has some problem that single-GPU training does not.
@Ir1d How about the performance on the training set? Is it better than, or similar to, the test results?
@zhangdan8962 Unfortunately, I still can't reproduce the FID reported in the paper.
Did you compute the FID on the test data?
Has anyone set `--scale_lr False` and then multiplied the lr by a factor of the GPU count? I don't know if this works; I'm trying it.
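To make the discussion above concrete, here is a small sketch of the linear LR-scaling rule that `scale_lr`-style flags typically implement (the function and argument names are illustrative assumptions, not taken verbatim from the repo):

```python
def effective_lr(base_lr: float, ngpu: int, batch_size: int,
                 accumulate_grad_batches: int = 1,
                 scale_lr: bool = True) -> float:
    """With scale_lr=True the optimizer LR grows linearly with the global
    batch (ngpu * batch_size * accumulation); with scale_lr=False the
    optimizer uses base_lr regardless of GPU count."""
    if scale_lr:
        return accumulate_grad_batches * ngpu * batch_size * base_lr
    return base_lr

# To match a 1-GPU run (per-GPU batch 8, base_lr 2e-6) when moving to
# 4 GPUs, either keep the global batch fixed (per-GPU batch 2), or
# divide base_lr by the GPU count so the scaled LR comes out the same:
lr_single = effective_lr(2e-6, ngpu=1, batch_size=8)
lr_multi  = effective_lr(2e-6 / 4, ngpu=4, batch_size=8)  # equal to lr_single
```

Note this only equalizes the learning rate; with DDP the gradients are averaged over a 4x larger global batch, so the two runs are still not step-for-step identical, which may explain part of the FID gap.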
Hello, I would like to ask about the difference between unconditional and conditional LDM. After the model is trained, does unconditional sampling generate images randomly, rather than from a given image? So, if I want to generate a normal image from a flawed image (without any annotations at inference time), should I use a conditional LDM? @Ir1d @zhangqizky @lostnighter @ader47 @zhangdan8962
I think you should use a conditional LDM (img2img).
I think an unconditional LDM is also fine. You could refer to ddpm.py#L1324; adding masks might help with that. FYI, this method is based on SDEdit.
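The SDEdit-style editing mentioned above can be sketched roughly as follows: noise the flawed image part-way along the forward process, then run the reverse process while re-imposing the unmasked region at every step. This is a toy NumPy sketch; the function names, noise schedule, and `denoise_step` callback are illustrative assumptions, not the repo's API:

```python
import numpy as np

def sdedit_masked(x0, mask, denoise_step, alphas_cumprod, t_start, rng=None):
    """x0: flawed image; mask: 1 where we want to regenerate, 0 to keep.
    Noise x0 to timestep t_start, then denoise step by step; after each
    step, the kept (mask == 0) region is overwritten with a copy of x0
    noised to the current timestep, as in RePaint/SDEdit-style editing."""
    rng = rng or np.random.default_rng(0)
    a = alphas_cumprod[t_start]
    x = np.sqrt(a) * x0 + np.sqrt(1 - a) * rng.standard_normal(x0.shape)
    for t in range(t_start, -1, -1):
        x = denoise_step(x, t)  # one reverse-diffusion step (model call)
        a_t = alphas_cumprod[t]
        known = np.sqrt(a_t) * x0 + np.sqrt(1 - a_t) * rng.standard_normal(x0.shape)
        x = mask * x + (1 - mask) * known  # keep unmasked pixels from x0
    return x
```

The choice of `t_start` trades faithfulness against editing strength: a small `t_start` stays close to the flawed input, a large one regenerates more freely.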
@Ir1d How long did it take to train on one GPU, and how large was your dataset?