SITTA
Question about the KL divergence loss
According to the paper, the KL divergence loss is computed between the textures t_A, t_ba and **t_B, t_ab**; however, the computation:
loss_netG_A_texture = -0.5 * (F.kl_div(t_A, t_ba) + F.kl_div(t_ba, t_A))
loss_netG_B_texture = -0.5 * (F.kl_div(t_B, t_ab) + F.kl_div(t_ab, t_B))
looks more like the JS divergence, but with a negative sign.
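For reference (writing $P$, $Q$ for the texture distributions), the code above matches the symmetrized sum of KL terms, whereas the JS divergence is defined through the mixture $M$:

$$\tfrac{1}{2}\left(D_{\mathrm{KL}}(P\,\|\,Q) + D_{\mathrm{KL}}(Q\,\|\,P)\right)\quad\text{(symmetrized KL)}$$

$$\mathrm{JS}(P\,\|\,Q) = \tfrac{1}{2}\,D_{\mathrm{KL}}(P\,\|\,M) + \tfrac{1}{2}\,D_{\mathrm{KL}}(Q\,\|\,M),\qquad M = \tfrac{1}{2}(P+Q)$$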
Also, the inputs to F.kl_div are supposed to be in log-softmax space, but the textures t_A, t_ba, t_B, t_ab come from the ReLU "space". Is this the reason behind the negative sign in the equation, or am I missing some detail about the implementation?
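For context, here is a minimal sketch of how I would have expected F.kl_div to be used, assuming hypothetical texture tensors t_A and t_ba (the actual tensors in the repo come from ReLU activations, so they are non-negative but not normalized); the inputs are mapped to log-probabilities with log_softmax before the symmetrized KL is taken:

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the texture vectors t_A and t_ba.
t_A = torch.rand(4, 128)
t_ba = torch.rand(4, 128)

# F.kl_div(input, target) computes KL(target || input); `input` must be
# log-probabilities, and `target` is probabilities unless log_target=True.
log_p = F.log_softmax(t_A, dim=1)
log_q = F.log_softmax(t_ba, dim=1)

# Symmetrized KL term (no negative sign), analogous to the loss in the
# question but with both arguments first mapped to log-probability space.
sym_kl = 0.5 * (
    F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
    + F.kl_div(log_q, log_p, reduction="batchmean", log_target=True)
)
print(sym_kl)
```

This is only my illustration of the documented F.kl_div contract, not the repo's actual code, so I may well be misreading how the texture tensors are meant to be normalized.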