
recon_loss

Open yilunzhao opened this issue 5 years ago • 4 comments

Hi, I cannot understand the expression `recon_loss = -(preds * bows).sum(1)` in the `forward()` function of etm.py. Could you help explain it? The loss function seems to differ from the equation defined in the paper. Thanks!

yilunzhao avatar Apr 21 '20 12:04 yilunzhao

It's the cross entropy (i.e., the negative log-likelihood of a multinomial whose categories are words); `preds` are already log-transformed in the decode function. It's implicitly contained in the first term of Equation 7, though not written out explicitly. It is also hidden in the "Estimate the ELBO and its gradient (backprop.)" step of Algorithm 1.
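A minimal sketch of that reconstruction term (shapes and the `logits` tensor are hypothetical, not taken from the repo): with `preds` holding log-probabilities over the vocabulary, `-(preds * bows).sum(1)` is exactly the per-document multinomial negative log-likelihood.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
bows = torch.randint(0, 5, (4, 10)).float()  # bag-of-words counts: 4 docs, 10-word vocab
logits = torch.randn(4, 10)                  # stand-in for the decoder's unnormalized output

# preds are log-probabilities, as the decode step produces via a log-softmax
preds = F.log_softmax(logits, dim=-1)

# reconstruction loss from the issue: for each document, sum over words of
# count * (-log p(word)) -- i.e. cross entropy against the word counts
recon_loss = -(preds * bows).sum(1)

# spelled-out equivalent, word by word
manual = torch.stack([-(bows[d] * preds[d]).sum() for d in range(4)])
assert torch.allclose(recon_loss, manual)
```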

What puzzles me a bit is that at the end of Algorithm 1 the variational parameters and the model parameters are updated separately, but in the implementation they are updated jointly via regular backprop.
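To make "jointly via regular backprop" concrete, here is a minimal sketch, not the ETM code itself: the encoder and decoder below are placeholder linear layers, and a single optimizer holds both parameter groups, so one `backward()`/`step()` updates variational and model parameters together.

```python
import torch

torch.manual_seed(0)
encoder = torch.nn.Linear(5, 3)  # stands in for the network producing variational parameters
decoder = torch.nn.Linear(3, 5)  # stands in for the model (generative) parameters

# joint update: one optimizer over both parameter groups
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

x = torch.randn(8, 5)
loss = ((decoder(encoder(x)) - x) ** 2).mean()  # toy loss for illustration
opt.zero_grad()
loss.backward()
opt.step()  # both groups move in the same step, unlike the alternating updates in Algorithm 1
```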

gokceneraslan avatar Apr 21 '20 14:04 gokceneraslan

Yeah, thanks for your help! I hadn't noticed it before. What's the difference between updating jointly and updating separately via regular backprop?

yilunzhao avatar Apr 22 '20 01:04 yilunzhao

Hi, I cannot understand the expression `recon_loss = -(preds * bows).sum(1)` in the `forward()` function of etm.py. Could you help explain it? The loss function seems to differ from the equation defined in the paper. Thanks!

`recon_loss` is the expectation of log P(d | parameters), the first term in Eq. 7. But I'm curious how the second term is computed. Why does the KL equal `-0.5 * torch.sum(1 + logsigma_theta - mu_theta.pow(2) - logsigma_theta.exp(), dim=-1).mean()`?
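That expression is the standard closed-form KL divergence between the Gaussian variational posterior N(mu, sigma^2) and the standard normal prior N(0, I), with `logsigma_theta` holding log sigma^2: KL = 0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2). A sketch with made-up shapes, checked numerically against `torch.distributions`:

```python
import torch
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)
mu_theta = torch.randn(4, 3)        # variational means (hypothetical batch of 4, 3 topics)
logsigma_theta = torch.randn(4, 3)  # log of the variances, i.e. log sigma^2

# closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims,
# averaged over the batch -- the expression quoted in the issue
kl = -0.5 * torch.sum(1 + logsigma_theta - mu_theta.pow(2)
                      - logsigma_theta.exp(), dim=-1).mean()

# numerical check: Normal takes a standard deviation, so std = exp(0.5 * log sigma^2)
q = Normal(mu_theta, (0.5 * logsigma_theta).exp())
p = Normal(torch.zeros_like(mu_theta), torch.ones_like(mu_theta))
kl_ref = kl_divergence(q, p).sum(-1).mean()
assert torch.allclose(kl, kl_ref, atol=1e-5)
```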

NonBee98 avatar Apr 22 '20 10:04 NonBee98

Hi, I cannot understand the expression `recon_loss = -(preds * bows).sum(1)` in the `forward()` function of etm.py. Could you help explain it? The loss function seems to differ from the equation defined in the paper. Thanks!

`recon_loss` is the expectation of log P(d | parameters), the first term in Eq. 7. But I'm curious how the second term is computed. Why does the KL equal `-0.5 * torch.sum(1 + logsigma_theta - mu_theta.pow(2) - logsigma_theta.exp(), dim=-1).mean()`?

I've met the same problem. Have you solved it yet?

neilwen987 avatar Aug 10 '22 07:08 neilwen987