
gradient of log classifying probability at $\mu$ or at $x_{t+1}$?

Open JianjianSha opened this issue 1 year ago • 1 comment

Thanks very much for the source code. I found an inconsistency when calculating $p(x_t|x_{t+1},y)$: in the paper, just after equation (6), the gradient is $g=\nabla_{x_t} \log p_{\phi}(y|x_t)\big|_{x_t=\mu}$, but in the code it is

gradient = cond_fn(x, self._scale_timesteps(t), **model_kwargs)

where x represents $x_{t+1}$.

What if I replace x with p_mean_var['mean'] (i.e., $\mu$)?
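To make the two options concrete, here is a minimal numpy sketch of the guided-mean update $\mu + s\,\Sigma\,g$, comparing the gradient evaluated at $x_{t+1}$ (as the code does) versus at $\mu$ (as the paper states). The classifier here is a hypothetical quadratic stand-in, not the repo's actual classifier; `shifted_mean` and `target` are names invented for illustration.

```python
import numpy as np

def classifier_log_prob_grad(x, target):
    # Toy quadratic "classifier": log p(y|x) = -0.5 * ||x - target||^2,
    # so its gradient w.r.t. x is simply (target - x). Illustration only.
    return target - x

def shifted_mean(mean, variance, eval_point, target, scale=1.0):
    # Guided mean: mu + s * Sigma * g, where g is the classifier gradient
    # evaluated at `eval_point` (either x_{t+1} or mu).
    g = classifier_log_prob_grad(eval_point, target)
    return mean + scale * variance * g

x_t1 = np.array([0.0, 2.0])    # current sample x_{t+1}
mu = np.array([0.1, 1.8])      # model-predicted posterior mean mu
var = np.array([0.01, 0.01])   # (diagonal) posterior variance Sigma
target = np.ones(2)

mean_code = shifted_mean(mu, var, x_t1, target)  # gradient at x_{t+1}, as in the repo
mean_paper = shifted_mean(mu, var, mu, target)   # gradient at mu, as in the paper
```

The two results differ, but only by a term of order $\Sigma \cdot (x_{t+1} - \mu)$, which is small when the variance is small, so in practice the two choices behave similarly late in sampling.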

Thanks a lot!

JianjianSha avatar Jun 21 '23 03:06 JianjianSha

You have an interesting and precise question. Have you found the answer yet? I'm looking forward to it.

dnkhanh45 avatar Sep 07 '23 04:09 dnkhanh45