guided-diffusion
generated images are noisy
Hi, there is a work based on guided-diffusion, https://github.com/WeilunWang/semantic-diffusion-model, which implements a semantic diffusion model. When I try to do sampling, the quality is unsatisfactory. I have received no response from its authors, so I came to this repository to look for answers (the cause may be related to the theory of diffusion models).
Here are some generated images on human faces: https://github.com/WeilunWang/semantic-diffusion-model/issues/14
And more on ADE20K and Cityscapes:
[images: DDPM and DDIM sampling results for ADE20K and Cityscapes]
Is there any way to improve the quality of DDIM sampling? I tried setting the guidance scale, but it doesn't help. I don't know what causes the noise in the generated images. Any tips?
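For context, the sampling command I run looks roughly like the following (a sketch: the script name, checkpoint path, and flags follow guided-diffusion's scripts/image_sample.py conventions and are placeholders; the actual flags in semantic-diffusion-model may differ):

    # DDIM sampling with more respaced steps (e.g. 250 instead of 25)
    python scripts/image_sample.py \
        --model_path ./checkpoints/model.pt \
        --num_samples 8 --batch_size 4 \
        --use_ddim True --timestep_respacing ddim250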
Hello! May I ask what value the loss of your trained model converges to? The images generated by my training are all noise; I can't see any content in them at all.
Hello! Have you solved this problem? I am in the same situation.
Sorry, but I am still struggling with it. If you find a solution, please tell me.
Were you able to solve this problem? I have the same issue.
I solved it by reducing the learning rate to 2e-5
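In guided-diffusion-style training scripts the learning rate is a command-line flag, so the change is just --lr (a sketch, assuming an image_train.py that follows guided-diffusion's defaults; the data path and batch size are placeholders):

    # retrain from scratch with a lower learning rate (guided-diffusion's default is 1e-4)
    python scripts/image_train.py \
        --data_dir ./data/train \
        --lr 2e-5 \
        --batch_size 4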
Thank you, I think this is a good way.
Hi, could you please help me? I have also tried the code from https://github.com/WeilunWang/semantic-diffusion-model. I want to train it for image-to-image translation rather than segmentation. How can I set the number of classes and this condition: --class_cond True? Thank you.
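For reference, in guided-diffusion itself --class_cond True only switches on label conditioning, and the number of classes is the NUM_CLASSES constant in guided_diffusion/script_util.py rather than a command-line flag; whether semantic-diffusion-model handles it the same way is an assumption on my part. The kind of command I mean is roughly (paths and hyperparameters are placeholders):

    # training with class conditioning enabled
    python scripts/image_train.py \
        --data_dir ./data/train \
        --class_cond True \
        --lr 2e-5 --batch_size 4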