
What exactly is NELBO and why do we optimize it?

legurp opened this issue 4 years ago • 3 comments

Can someone tell me why we optimize the NELBO? The paper only says "We optimize the ELBO with respect to the variational parameters." As far as I understand it, D-ETM uses three neural networks to infer the variational distributions for theta, eta and alpha, and then computes KL divergences for them. Are these KL divergence values simply added together and optimized jointly? But why is the NLL added as well? And I thought that "Solving this optimization problem is equivalent to maximizing the evidence lower bound (ELBO)" would mean that we maximize it, rather than minimize it as a loss, which is what the model seems to do.

Sorry, I am pretty confused (I am rather new to Bayesian statistics and variational inference)

legurp avatar Oct 12 '20 09:10 legurp

In detm.py, the forward() function says:

```python
nelbo = nll + kl_alpha + kl_eta + kl_theta
return nelbo, nll, kl_alpha, kl_eta, kl_theta
```

In main.py it says:

```python
loss, nll, kl_alpha, kl_eta, kl_theta = model(data_batch, normalized_data_batch, times_batch, train_rnn_inp, args.num_docs_train)
loss.backward()
optimizer.step()
```
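So if I read this correctly, a training step looks roughly like this toy sketch (my own simplification with a dummy model and made-up KL terms, not the actual DETM code):

```python
import torch

# Toy stand-in for DETM: the real model computes nll from the reconstructed
# bag-of-words and the three KL terms from the variational distributions of
# theta, eta and alpha; here they are just placeholders with the same roles.
class ToyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(10))

    def forward(self, bows):
        nll = -torch.distributions.Normal(self.mu, 1.0).log_prob(bows).sum()
        kl_theta = 0.5 * (self.mu ** 2).sum()   # dummy KL terms, only here so the
        kl_eta = torch.tensor(0.0)              # returned tuple has the same shape
        kl_alpha = torch.tensor(0.0)            # as in detm.py
        nelbo = nll + kl_alpha + kl_eta + kl_theta  # negative ELBO used as the loss
        return nelbo, nll, kl_alpha, kl_eta, kl_theta

model = ToyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

bows = torch.randn(10)                      # fake "data batch"
optimizer.zero_grad()
loss, nll, kl_alpha, kl_eta, kl_theta = model(bows)
loss.backward()                             # gradients of the NELBO
optimizer.step()                            # one step of minimising the NELBO
```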

legurp avatar Oct 12 '20 10:10 legurp

The following paper might be helpful: https://arxiv.org/abs/2002.07514

mona-timmermann avatar Oct 15 '20 07:10 mona-timmermann

Hi @legurp, NELBO is the "negative ELBO", and NLL stands for "negative log-likelihood". People usually state that they are maximising the ELBO, it's true, but since log-probabilities are <= 0, it is often more convenient to multiply the ELBO by -1 (so that it becomes non-negative) and then minimise this new quantity as a loss.
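Written out schematically (I'm collapsing the expectations and the exact form of the DETM priors, so treat this as a sketch rather than the precise objective from the paper):

$$
\begin{aligned}
\mathrm{ELBO} &= \mathbb{E}_{q}\big[\log p(\mathbf{w} \mid \theta, \alpha)\big]
  - \mathrm{KL}\big(q(\theta)\,\|\,p(\theta)\big)
  - \mathrm{KL}\big(q(\eta)\,\|\,p(\eta)\big)
  - \mathrm{KL}\big(q(\alpha)\,\|\,p(\alpha)\big) \\
\mathrm{NELBO} &= -\mathrm{ELBO}
  = \underbrace{-\,\mathbb{E}_{q}\big[\log p(\mathbf{w} \mid \theta, \alpha)\big]}_{\texttt{nll}}
  + \texttt{kl\_theta} + \texttt{kl\_eta} + \texttt{kl\_alpha}
\end{aligned}
$$

which is exactly the `nelbo = nll + kl_alpha + kl_eta + kl_theta` line from detm.py: minimising the NELBO returned by `forward()` is the same optimisation problem as maximising the ELBO from the paper.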

jfcann avatar Jan 29 '21 01:01 jfcann