stable-diffusion
About the classifier-free guidance sampling code?
In the paper, the LaTeX is \tilde{\epsilon}_\theta(z_t, c) = (1 + w)\,\epsilon_\theta(z_t, c) - w\,\epsilon_\theta(z_t), so I think there is a problem in this code:
def get_model_output(x, t):
    if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
        e_t = self.model.apply_model(x, t, c)
    else:
        # run the conditional and unconditional predictions in one batched forward pass
        x_in = torch.cat([x] * 2)
        t_in = torch.cat([t] * 2)
        c_in = torch.cat([unconditional_conditioning, c])
        e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
        # classifier-free guidance combination
        e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)

    if score_corrector is not None:
        assert self.model.parameterization == "eps"
        e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)

    return e_t
This is from plms.py, and the line in question is line 179:
e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
From my point of view, it should be:
e_t = e_t + unconditional_guidance_scale * (e_t - e_t_uncond)
Can you tell me why? @apolinario @asanakoy @pesser @patrickvonplaten @rromb
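For reference, expanding both expressions (just rearranging terms, and writing s for unconditional_guidance_scale):

e_t_uncond + s * (e_t - e_t_uncond) = (1 - s) * e_t_uncond + s * e_t
e_t + s * (e_t - e_t_uncond) = (1 + s) * e_t - s * e_t_uncond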
GLIDE has the same formulation as LDM. I wonder why.
If you have figured out the reason, I would appreciate it if you could share it with me.
I also hope to find out why, and I will continue to search for an answer. Let's work on it together!
@JaosonMa Any updates on this? I still can't figure out the reason behind the modification.
Have you found out the reason? I'm also wondering why.
Sorry, I have not figured this out.
You should look at the DDPM paper again. Writing

e_t = e_t + unconditional_guidance_scale * (e_t - e_t_uncond)

makes the effective guidance scale one bigger than intended, since it equals e_t_uncond + (unconditional_guidance_scale + 1) * (e_t - e_t_uncond). In other words, the unconditional_guidance_scale used in the code already corresponds to 1 + w from the paper's (1 + w) * e_t - w * e_t_uncond, which is also why the code skips guidance entirely when the scale is exactly 1.
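To make the relationship concrete, here is a minimal sketch (not code from the repository; the tensor shapes, the scale value, and the variable names are just for illustration) checking that the repo's combination with scale s is identical to the paper-style combination with w = s - 1:

import torch

# stand-ins for the model's noise predictions
e_t = torch.randn(2, 4, 64, 64)         # conditional prediction eps(x_t, c)
e_t_uncond = torch.randn(2, 4, 64, 64)  # unconditional prediction eps(x_t)

s = 7.5      # guidance scale in the code's convention
w = s - 1.0  # guidance weight in the paper's notation

# formulation used in plms.py (and GLIDE): eps_uncond + s * (eps_cond - eps_uncond)
repo_form = e_t_uncond + s * (e_t - e_t_uncond)

# paper-style formulation: (1 + w) * eps_cond - w * eps_uncond
paper_form = (1 + w) * e_t - w * e_t_uncond

# identical up to floating-point rounding
print(torch.allclose(repo_form, paper_form, atol=1e-5))  # True

So the two expressions differ only by where the "+1" sits: the scale used in the code already includes it, which matches the GLIDE formulation mentioned above.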