DDPM-Pytorch
This repo implements Denoising Diffusion Probabilistic Models (DDPM) in PyTorch
Hi Sir, I'm working on an LDM for my thesis and your video was very helpful for figuring out how DDPM works. I only have a doubt about the...
Thanks for the awesome explanation. Could you tell me which changes we need before training the model on our data?
Thanks for the nice walkthrough of diffusion models. I'm really curious why positional encoding (as used in ViT, the vanilla Transformer, etc.) is not used in the self-attention layers. Is that...
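For context on the question above: the positional encoding it refers to is the standard sinusoidal scheme from the vanilla Transformer. A minimal sketch of what adding it would look like (the function name and shapes are illustrative, not taken from this repo):

```python
import math
import torch

def sinusoidal_pos_encoding(seq_len, dim):
    # Standard sinusoidal positional encoding (as in ViT / vanilla
    # Transformer): even channels get sin, odd channels get cos.
    position = torch.arange(seq_len).unsqueeze(1).float()        # (seq_len, 1)
    div_term = torch.exp(
        torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim)
    )                                                            # (dim/2,)
    pe = torch.zeros(seq_len, dim)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe  # (seq_len, dim), added to token features before attention
```

In this codebase the spatial positions are not encoded this way; whether the convolutional layers around each attention block provide enough positional information is exactly what the question is asking about.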
I am running this code on a set of images but am getting this error: "CUDA out of memory. Tried to allocate 150.06 GiB (GPU 0; 15.89 GiB total capacity; 720.18...
Sorry to bother the author again and again; I would like to ask a few additional questions: 1. How much video memory should I use with your model for my...
Hi author, when I trained my model on coloured datasets, I found that the memory required for training was too large. After adding multiple GPUs, I switched to a data-parallel...
```
self.t_emb_layers = nn.ModuleList([
    nn.Sequential(
        nn.SiLU(),
        nn.Linear(t_emb_dim, out_channels)
    ) for _ in range(num_layers)
])
```
Shouldn't it be the following instead?
```
self.t_emb_layers = nn.ModuleList([
    nn.Sequential(
        nn.Linear(t_emb_dim, out_channels),
        nn.SiLU(),
        nn.Linear(out_channels, out_channels)
    ) for _...
```
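For what it's worth, both orderings in the question above produce a tensor of the same shape; they differ only in capacity (the proposed variant adds an extra projection). A self-contained sketch, with illustrative dimensions:

```python
import torch
import torch.nn as nn

t_emb_dim, out_channels, num_layers = 128, 64, 2

# Ordering as written in the repo: activation, then one projection per layer.
repo_style = nn.ModuleList([
    nn.Sequential(nn.SiLU(), nn.Linear(t_emb_dim, out_channels))
    for _ in range(num_layers)
])

# Ordering proposed in the question: an extra hidden projection per layer.
proposed = nn.ModuleList([
    nn.Sequential(
        nn.Linear(t_emb_dim, out_channels),
        nn.SiLU(),
        nn.Linear(out_channels, out_channels),
    )
    for _ in range(num_layers)
])

t_emb = torch.randn(4, t_emb_dim)  # (batch, t_emb_dim)
out_repo = repo_style[0](t_emb)    # (batch, out_channels)
out_prop = proposed[0](t_emb)      # (batch, out_channels) as well
```

Since the time embedding is typically produced by a learned MLP upstream, the single-projection form is a common design choice; the two-projection form is also seen in other DDPM implementations.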
```
noise = torch.randn_like(im).to(device)
t = torch.full((im.shape[0],), diffusion_config['num_timesteps']-1, device=device)
#t = torch.randint(0, diffusion_config['num_timesteps'], (im.shape[0],)).to(device)
xt = scheduler.add_noise(im, noise, t)
for i in tqdm(reversed(range(diffusion_config['num_timesteps']))):
    # Get prediction of noise
    noise_pred =...
```
Can I remove the attention layers for high-resolution images?