DDPM-Pytorch
Question on the training process of a Diffusion Model
Hi Sir, I'm working on an LDM for my thesis, and your video was very helpful in figuring out how the DDPM works. I only have a doubt about the training process. Right now I:
- Sample a batch of images and the related captions
- Pass the images through the encoder of the diffusion model (to obtain the latents) and the captions through the CLIP encoder
- Sample a random t and add noise to the latents with the scheduler
- Pass the noisy latents through the UNet to obtain the predicted noise
- Compute the loss between the real noise and the predicted noise
My doubt is: is that all I have to do? During training, do I not have to run all the steps of the forward and reverse processes, and can I limit myself to the single t I randomly sample?
Hello @danielemolino , Yes, you are correct. Assume your batch size is 4; then during training, for each batch, you would do the following:
- Sample 4 timesteps (from 0 to 1000)
- Pass these 4 sampled timesteps and the 4 latent images to the scheduler, which will return the 4 noisy images (based on the timesteps)
- The UNet would then predict the noise for these 4 images, and your loss would be the MSE between the predicted and actual noise.
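The batched step above can be sketched in plain NumPy (the linear beta schedule values and the tensor shapes here are illustrative, and a random array stands in for the UNet output; in a real PyTorch loop the scheduler's noising method performs the same closed-form step):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000

# Linear beta schedule and the cumulative-product alphas used by the closed form
# q(x_t | x_0): x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

batch, c, h, w = 4, 3, 8, 8                  # 4 latent "images" (toy sizes)
x0 = rng.standard_normal((batch, c, h, w))

# One random timestep per sample in the batch
t = rng.integers(0, T, size=batch)

# Add noise in a single step -- no loop over 0..t is needed
noise = rng.standard_normal(x0.shape)
sqrt_ab = np.sqrt(alpha_bars[t])[:, None, None, None]
sqrt_one_minus_ab = np.sqrt(1.0 - alpha_bars[t])[:, None, None, None]
xt = sqrt_ab * x0 + sqrt_one_minus_ab * noise

# Stand-in for the UNet's predicted noise; the loss is plain MSE
pred_noise = rng.standard_normal(noise.shape)
loss = np.mean((pred_noise - noise) ** 2)
```

Each of the 4 samples gets its own t, so one batch trains the network at 4 different noise levels at once.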
Ok, thank you so much for the fast answer. Another doubt I'm facing is how to handle the two possible parameterizations: eps and x0. I understand how the first one works, but I have some doubts about the second: is the only difference that, before computing the loss, I reconstruct the original image from the predicted noise and then compute the loss between the real and reconstructed images?
Yes, but in this repo I have only used the eps variation. You can take a look at the huggingface library to get a better understanding of both variations. It's the same as what you mentioned, but I thought it would be better if you had an implementation to look at as well.
I am attaching the blocks of code for both. For how the two parameterizations differ in the training loss, you can see it here - https://github.com/huggingface/diffusers/blob/baab065679b616c2a4da2abcb83c0c2764291256/examples/unconditional_image_generation/train_unconditional.py#L568-L577
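The difference between the two parameterizations comes down to what the network output is compared against. A NumPy sketch of the idea (schedule values and shapes are illustrative; the naming of the two modes follows diffusers' "epsilon" / "sample" prediction types):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

x0 = rng.standard_normal((4, 4, 8, 8))
t = rng.integers(0, T, size=4)
noise = rng.standard_normal(x0.shape)
ab = alpha_bars[t][:, None, None, None]
xt = np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * noise

model_out = rng.standard_normal(x0.shape)   # stand-in for the UNet output

prediction_type = "epsilon"                 # or "sample" (x0 parameterization)
if prediction_type == "epsilon":
    # Model predicts the noise; the target is the sampled noise
    loss = np.mean((model_out - noise) ** 2)
else:
    # Model predicts x0 directly; the target is the clean latent itself
    loss = np.mean((model_out - x0) ** 2)

# Note the two are linked: the clean latent can always be recovered
# from x_t and the noise by inverting the forward closed form
x0_rec = (xt - np.sqrt(1.0 - ab) * noise) / np.sqrt(ab)
```

So in the eps case you regress the noise, and in the x0 case you regress the clean sample; the last line shows why predicting one lets you reconstruct the other.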
For inference, the reverse diffusion step method (going from t to t-1), you can take a look at here - https://github.com/huggingface/diffusers/blob/baab065679b616c2a4da2abcb83c0c2764291256/src/diffusers/schedulers/scheduling_ddpm.py#L446-L475
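A NumPy sketch of one such reverse step under the eps parameterization, following the standard DDPM posterior mean (schedule values are illustrative, and a random array stands in for the UNet's noise prediction):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_step(xt, eps_hat, t):
    """One DDPM step from x_t to x_{t-1}, given predicted noise eps_hat."""
    ab_t = alpha_bars[t]
    ab_prev = alpha_bars[t - 1] if t > 0 else 1.0

    # Recover the x0 estimate from the predicted noise
    x0_hat = (xt - np.sqrt(1.0 - ab_t) * eps_hat) / np.sqrt(ab_t)

    # Posterior mean of q(x_{t-1} | x_t, x0_hat)
    coef_x0 = np.sqrt(ab_prev) * betas[t] / (1.0 - ab_t)
    coef_xt = np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab_t)
    mean = coef_x0 * x0_hat + coef_xt * xt

    if t == 0:
        return mean  # no noise is added at the final step
    var = betas[t] * (1.0 - ab_prev) / (1.0 - ab_t)
    return mean + np.sqrt(var) * rng.standard_normal(xt.shape)

xt = rng.standard_normal((1, 4, 8, 8))
eps_hat = rng.standard_normal(xt.shape)   # stand-in for the UNet output
x_prev = reverse_step(xt, eps_hat, 500)
```

Sampling just calls this in a loop from t = T-1 down to 0, feeding each step's output into the next.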
Hope this helps
Thank you so much!