DenoisingDiffusionProbabilityModel-ddpm-
Question about variance calculation in sampling step
In Diffusion.py, posterior_var is calculated at line 66. Why do you use torch.cat([self.posterior_var[1:2], self.betas[1:]]) at line 77 to extract the variance, instead of posterior_var itself?
```python
def p_mean_variance(self, x_t, t):
    # below: only log_variance is used in the KL computations
    var = torch.cat([self.posterior_var[1:2], self.betas[1:]])  # I think var should be equal to self.posterior_var
    var = extract(var, t, x_t.shape)
    eps = self.model(x_t, t)
    xt_prev_mean = self.predict_xt_prev_mean_from_eps(x_t, t, eps=eps)
    return xt_prev_mean, var
```
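To make the question concrete, here is a minimal sketch (assuming a standard linear beta schedule and the usual DDPM definitions; not the repository's exact code) of how posterior_var is typically built and what the concatenation evaluates to element-wise:

```python
import torch

# Minimal sketch, assuming a linear beta schedule and the usual DDPM
# quantities; the repository's actual schedule values may differ.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)
alphas_bar_prev = torch.cat([torch.ones(1), alphas_bar[:-1]])  # alpha_bar_0 = 1

# True posterior variance beta_tilde_t = beta_t * (1 - alpha_bar_{t-1}) / (1 - alpha_bar_t).
# Its first entry is exactly 0 because alphas_bar_prev[0] == 1.
posterior_var = betas * (1.0 - alphas_bar_prev) / (1.0 - alphas_bar)
print(posterior_var[0])  # tensor(0.)

# The tensor built in p_mean_variance: betas everywhere except index 0,
# which is replaced by the first nonzero posterior variance.
var = torch.cat([posterior_var[1:2], betas[1:]])
print(var[0].item(), betas[0].item())   # the two differ only at index 0
print(torch.equal(var[1:], betas[1:]))  # True
```

One thing the sketch makes visible: posterior_var[0] is exactly zero, so using self.posterior_var directly would put a zero variance (and a -inf log-variance, if one were taken) at the final sampling step.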
I have the same doubt. What's the reason for doing this?
I have the same question. Shouldn't the variance be self.posterior_var?
Have you solved the problem? Can you fill me in?
@zoubohao Could you answer this issue? Thank you!
In the DDPM paper, the variance is supposed to equal either posterior_var or betas, so I don't understand why they were concatenated here.
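For reference, the DDPM paper (Ho et al., 2020) reports two fixed choices for the reverse-process variance, both giving similar results:

```latex
\sigma_t^2 = \beta_t
\qquad\text{or}\qquad
\sigma_t^2 = \tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\,\beta_t,
\qquad\text{with } \tilde{\beta}_1 = \frac{1 - \bar{\alpha}_0}{1 - \bar{\alpha}_1}\,\beta_1 = 0
\text{ because } \bar{\alpha}_0 = 1.
```

Under that reading, torch.cat([self.posterior_var[1:2], self.betas[1:]]) looks like the sigma_t^2 = beta_t choice with the degenerate first entry (beta_tilde_1 = 0) swapped for the first nonzero posterior variance. Other public DDPM implementations, e.g. OpenAI's improved-diffusion, build the same tensor for their "fixed large" variance and comment that setting the initial variance this way gives a better decoder log-likelihood. That is a guess at the intent here, though, not something the author has confirmed.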