
Question about the KL divergence loss


According to the paper, the KL divergence loss is computed between the textures t_A, t_ba and t_B, t_ab. However, the computation in the code is:

```python
loss_netG_A_texture = -0.5 * (F.kl_div(t_A, t_ba) + F.kl_div(t_ba, t_A))
loss_netG_B_texture = -0.5 * (F.kl_div(t_B, t_ab) + F.kl_div(t_ab, t_B))
```

This looks more like a symmetrized KL (similar in spirit to the JS divergence), but with a negative sign.
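For reference, this is roughly how I would write a symmetric KL and the JS divergence on plain probability vectors (my own sketch with made-up helper names, not the repository code):

```python
import torch

def kl(p, q, eps=1e-8):
    # KL(p || q) over the last dimension, for probability vectors p and q.
    p = p.clamp(min=eps)
    q = q.clamp(min=eps)
    return (p * (p / q).log()).sum(-1)

def symmetric_kl(p, q):
    # 0.5 * (KL(p || q) + KL(q || p)) -- what I would expect the paper's texture loss to be.
    return 0.5 * (kl(p, q) + kl(q, p))

def js_divergence(p, q):
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with the mixture m = 0.5 * (p + q).
    # Note there is no negative sign in either definition.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```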

Also, the first argument of F.kl_div is supposed to be in log-softmax space, but the textures t_A, t_ba, t_B, t_ab come from the ReLU "space". Is this the reason behind the negative sign in the equation, or am I missing some detail of the implementation?
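For comparison, this is how I understand F.kl_div is meant to be called (a minimal sketch with dummy logits, not the repository code):

```python
import torch
import torch.nn.functional as F

# F.kl_div(input, target) expects `input` as log-probabilities and `target` as
# probabilities; it computes target * (log(target) - input), i.e. KL(target || softmax-of-input).
logits_a = torch.randn(4, 256)
logits_b = torch.randn(4, 256)

log_p_a = F.log_softmax(logits_a, dim=-1)   # log-probabilities (the `input` argument)
p_b = F.softmax(logits_b, dim=-1)           # probabilities (the `target` argument)

# KL(p_b || p_a), averaged over the batch.
kl_ab = F.kl_div(log_p_a, p_b, reduction="batchmean")

# Passing raw ReLU feature maps as `input` would not be log-probabilities,
# so the result would not be a KL divergence in the usual sense.
```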

vincent1bt, Feb 23 '22