guided-diffusion
gradient of log classifier probability at $\mu$ or at $x_{t+1}$?
Thanks very much for the source code. I found an inconsistency when calculating $p(x_t|x_{t+1},y)$: in the paper, just after Equation (6), the gradient is $g=\nabla_{x_t} \log p_{\phi}(y|x_t)|_{x_t=\mu}$, but in the code it is
gradient = cond_fn(x, self._scale_timesteps(t), **model_kwargs)
where x represents $x_{t+1}$.
What if I replace x with p_mean_var['mean'] (i.e. $\mu$)?
Thanks a lot!
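To make the two choices concrete, here is a minimal toy sketch of the mean-shifting step, with cond_fn replaced by an analytic stand-in (a quadratic "log-classifier", so the gradient is just target - x); the guided_mean helper, the grad_at_mean flag, and all the numbers are my own illustration, not the repo's actual API:

```python
# Toy stand-in for the classifier gradient. In the real code, cond_fn
# returns grad_x log p_phi(y | x, t); here we use log p ∝ -||x - target||^2 / 2,
# whose gradient at a point x is simply (target - x).
target = [1.0, -1.0]

def cond_fn(x, t):
    # gradient of the toy log-probability, evaluated at x
    return [t_i - x_i for t_i, x_i in zip(target, x)]

def guided_mean(mean, variance, x, t, grad_at_mean=False):
    """Shift the reverse-process mean by variance * gradient.

    grad_at_mean=False mirrors the released code (gradient taken at x = x_{t+1});
    grad_at_mean=True mirrors the paper's text after Eq. (6) (gradient at mu).
    """
    point = mean if grad_at_mean else x
    g = cond_fn(point, t)
    return [m + v * g_i for m, v, g_i in zip(mean, variance, g)]

x_next = [0.5, 0.5]   # hypothetical x_{t+1}
mu = [0.4, 0.3]       # hypothetical model mean
var = [0.1, 0.1]

print(guided_mean(mu, var, x_next, t=0))                     # ≈ [0.45, 0.15]
print(guided_mean(mu, var, x_next, t=0, grad_at_mean=True))  # ≈ [0.46, 0.17]
```

The two results differ because the gradient is evaluated at different points; when $\mu \approx x_{t+1}$ (small step sizes late in training/sampling), the difference shrinks, which may be why both work in practice.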
This is an interesting and precise question. Have you found an answer yet? I'm looking forward to it as well.