stable-diffusion
Why should lvlb_weights be implemented like this when parameterized as "x0"?
I don't understand why it is implemented this way:
if self.parameterization == "eps":
    lvlb_weights = self.betas ** 2 / (
            2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))
elif self.parameterization == "x0":
    lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod))  # Confusing
else:
    raise NotImplementedError("mu not supported")
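One thing that may be part of the confusion: because of Python operator precedence, the denominator 2. * 1 - torch.Tensor(alphas_cumprod) evaluates to 2 - alphas_cumprod, not 2 * (1 - alphas_cumprod). A quick check with made-up values:

import torch

# Illustrative values only; any alphas_cumprod tensor shows the same behaviour.
alphas_cumprod = torch.tensor([0.9, 0.5, 0.1])

print(2. * 1 - alphas_cumprod)    # tensor([1.1000, 1.5000, 1.9000])  i.e. 2 - alphas_cumprod
print(2. * (1 - alphas_cumprod))  # tensor([0.2000, 1.0000, 1.8000])  i.e. 2 * (1 - alphas_cumprod)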
I think it should be implemented like this:
I want to add parameterization as "v". How should I modify the code? Divide by alpha squared, or something else?
I have the same problem. May I know if it has been resolved?
Sorry, I haven't solved this problem yet; we'll have to wait for someone to explain it.
@HWH-2019 You can find the v parameterization here: https://github.com/Stability-AI/stablediffusion/blob/main/ldm/models/diffusion/ddpm.py#L896
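For anyone who doesn't want to dig through the link, here is a minimal, self-contained sketch of how a "v" branch could slot into the same structure as the branches quoted above. The schedule values (a linear beta schedule over 1000 steps) are placeholders just to make it runnable, and the constant weight of ones for "v" is what I remember the linked file using, so please verify against ddpm.py#L896 before relying on it:

import torch

# Placeholder schedule, only to make this sketch self-contained and runnable.
timesteps = 1000
betas = torch.linspace(1e-4, 2e-2, timesteps, dtype=torch.float64)
alphas = 1. - betas
alphas_cumprod = torch.cumprod(alphas, dim=0)
alphas_cumprod_prev = torch.cat([torch.ones(1, dtype=torch.float64), alphas_cumprod[:-1]])
posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)

parameterization = "v"
if parameterization == "eps":
    # Coefficient of the squared error in Eq. (12) of the DDPM paper.
    lvlb_weights = betas ** 2 / (2 * posterior_variance * alphas * (1 - alphas_cumprod))
elif parameterization == "v":
    # As far as I recall, the Stability-AI repo simply uses a constant weight here.
    lvlb_weights = torch.ones_like(betas)
else:
    raise NotImplementedError("mu not supported")

# The t=0 term is degenerate (the posterior variance there is 0), so copy the
# t=1 weight, mirroring what register_schedule does in ddpm.py.
lvlb_weights[0] = lvlb_weights[1]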
I don't even know why lvlb_weights should be written like this. Could you please explain why? A link or a paper that describes the lvlb equation would be helpful. Thanks a lot!
Can anyone explain lvlb_weights? Thanks.
This refers to Equation (12) in https://arxiv.org/abs/2006.11239v2.
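For reference, Equation (12) of that paper (Ho et al., 2020) is the simplified form of the L_{t-1} term:

\mathbb{E}_{x_0,\epsilon}\!\left[\frac{\beta_t^2}{2\sigma_t^2\,\alpha_t\,(1-\bar\alpha_t)}\,\big\|\epsilon-\epsilon_\theta\big(\sqrt{\bar\alpha_t}\,x_0+\sqrt{1-\bar\alpha_t}\,\epsilon,\;t\big)\big\|^2\right]

The coefficient \beta_t^2 / (2\sigma_t^2\,\alpha_t\,(1-\bar\alpha_t)) is exactly what the "eps" branch stores in lvlb_weights, with \sigma_t^2 being self.posterior_variance. The "x0" branch does not follow as directly from this, since Eq. (12) is written for the eps parameterization.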