score_sde_pytorch
Initialisation of pc_inpainting
My question concerns this line:
# Initial sample
x = data * mask + sde.prior_sampling(data.shape).to(data.device) * (1. - mask)
Let's assume data is normalised to have approximately unit standard deviation. In that case, we're initialising x as a tensor whose known region has std ≈ 1 while the unknown region has std = prior_std, which is certainly out of distribution for the score network. Wouldn't it make more sense to initialise it similarly to the body of inpaint_update_fn?
# Diffuse the known region to t = T before combining it with the prior sample
vec_t = torch.ones(data.shape[0], device=data.device) * timesteps[0]
masked_data_mean, std = sde.marginal_prob(data, vec_t)
masked_data = masked_data_mean + torch.randn_like(data) * std[:, None, None, None]
x = masked_data * mask + sde.prior_sampling(data.shape).to(data.device) * (1. - mask)
I have tried the modification and visually I can't tell whether one is significantly better than the other, but I imagine a more thorough benchmark could reveal differences in FID.
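To make the comparison concrete, here is a self-contained sketch of both initialisations. The ToyVPSDE class below is a hypothetical stand-in for the repo's VPSDE (the real one lives in sde_lib.py); only marginal_prob and prior_sampling are mimicked, and the data/mask tensors are synthetic.

```python
import torch

# Hypothetical simplification of the repo's VPSDE for illustration only.
# marginal_prob returns the mean and std of the perturbation kernel p_t(x_t | x_0).
class ToyVPSDE:
    def __init__(self, beta_min=0.1, beta_max=20.0, T=1.0):
        self.beta_min, self.beta_max, self.T = beta_min, beta_max, T

    def marginal_prob(self, x, t):
        log_mean_coeff = (-0.25 * t ** 2 * (self.beta_max - self.beta_min)
                          - 0.5 * t * self.beta_min)
        mean = torch.exp(log_mean_coeff)[:, None, None, None] * x
        std = torch.sqrt(1.0 - torch.exp(2.0 * log_mean_coeff))
        return mean, std

    def prior_sampling(self, shape):
        return torch.randn(*shape)

sde = ToyVPSDE()
data = torch.randn(4, 3, 8, 8)                    # pretend data with std ~= 1
mask = (torch.rand_like(data) > 0.5).float()      # 1 = known pixel, 0 = unknown
t0 = torch.full((data.shape[0],), sde.T)

# Original initialisation: clean data in the known region, prior noise elsewhere.
x_orig = data * mask + sde.prior_sampling(data.shape) * (1.0 - mask)

# Proposed initialisation: diffuse the known region to t = T first, so both
# regions match the marginal distribution the score network saw during training.
mean, std = sde.marginal_prob(data, t0)
masked_data = mean + torch.randn_like(data) * std[:, None, None, None]
x_prop = masked_data * mask + sde.prior_sampling(data.shape) * (1.0 - mask)

print(x_orig.shape, x_prop.shape, std[0].item())
```

For the VP SDE the two initialisations end up nearly identical at t = T, since the marginal there is close to N(0, I); the gap should be larger for a VE SDE, where prior_std is much greater than 1.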
Original algorithm:

My modification:
