Diffusion-Models-pytorch
Training generates fully red output images
While training the unchanged model on a different dataset (portraits of faces), I am getting a bunch of fully red outputs.
I also changed the code to train on the same dataset, but grayscaled before training; as a result I still get monocolored outputs, except this time they are either white or black.
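For reference, this is roughly how I grayscale the images before training. It is a minimal sketch: the `get_grayscale_data` helper, the `args` fields, and the exact transform values are my assumptions, not the repo's verbatim code.

```python
import torch
import torchvision
import torchvision.transforms as T

def get_grayscale_data(args):
    # Same kind of pipeline as regular training, plus a grayscale step.
    transforms = T.Compose([
        T.Resize(80),                                    # resize a bit above the target size
        T.RandomResizedCrop(args.image_size, scale=(0.8, 1.0)),
        T.Grayscale(num_output_channels=3),              # grayscale, kept at 3 channels for the UNet
        T.ToTensor(),
        T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),   # map pixel values to [-1, 1]
    ])
    dataset = torchvision.datasets.ImageFolder(args.dataset_path, transform=transforms)
    return torch.utils.data.DataLoader(dataset, batch_size=args.batch_size, shuffle=True)
```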
Has anyone had the same issue? Is there something I can do to prevent this?
Hey, can you try training on the original dataset I used and tell me whether you get the same results, or whether that training also fails?
These are the results of training on the landscapes dataset. The only thing I changed is the batch size: my GPU has only 4 GB, so it has to be 2.
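To be concrete, this is more or less the launch configuration I used. It's a minimal sketch; the argument names and default values mirror what I believe the repo's `ddpm.py` expects, so treat them as assumptions.

```python
import argparse

def build_args():
    # Only deviation from the defaults is batch_size, since 4 GB of VRAM isn't enough for more.
    args = argparse.Namespace()
    args.run_name = "DDPM_Unconditional"
    args.epochs = 500
    args.batch_size = 2                         # lowered so training fits on a 4 GB GPU
    args.image_size = 64
    args.dataset_path = "./landscape_dataset"   # hypothetical local path
    args.device = "cuda"
    args.lr = 3e-4
    return args

if __name__ == "__main__":
    print(build_args())  # these would be passed to the repo's train(args)
```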

Interesting; I opened a similar issue for the following repository: https://github.com/cloneofsimo/minDiffusion/issues/4