
Question about forward lld (Gaussian prior) and entropy estimation in the MMILB module

Open · yangmiemiemie1 opened this issue 2 years ago · 1 comment

`positive = -(mu - y)**2/2./torch.exp(logvar)` — is the "positive" vector (line 152 above) meant to be $\log p(y|x)$ for $p(y|x) \sim \mathcal{N}(y \mid \mu_{\theta_1}(x), \sigma^2(x) I)$? If so, where are the $-(\ln\sigma + C)$ terms from the Gaussian log-density? Why are they missing?

yangmiemiemie1 avatar Nov 01 '23 11:11 yangmiemiemie1

Hi, this code is copied directly from the official implementation of the paper *Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis*; please refer to the official implementation here:

https://github.com/declare-lab/Multimodal-Infomax/blob/cd0774c5a712ca5f1a5497dbf27dde11cade7434/src/modules/encoders.py#L152

We also found some other errors in the paper's symbols. For example, Eq. (3), $I(X;Y) = E_{p(x,y)}\left[\log \frac{q(y|x)}{p(y)}\right] + E_{p(\mathbf{y})}[\mathrm{KL}(p(y|x) \,\|\, q(y|x))]$, should be modified to $I(X;Y) = E_{p(x,y)}\left[\log \frac{q(y|x)}{p(y)}\right] + E_{p(\mathbf{x})}[\mathrm{KL}(p(y|x) \,\|\, q(y|x))]$. As for your question about the missing $-(\ln\sigma + C)$ terms:

  1. The constant $C$ does not matter, because it is absorbed by the weighted loss coefficient.
  2. The $-\ln\sigma$ term is probably dropped intentionally; I think it contributes little to model performance (but I am not sure).
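To make the dropped terms concrete, here is a minimal sketch in plain Python (the function names are hypothetical, not from the repo). It compares the full Gaussian log-density $\log \mathcal{N}(y \mid \mu, \sigma^2) = -\frac{(y-\mu)^2}{2\sigma^2} - \frac{1}{2}\log\sigma^2 - \frac{1}{2}\log 2\pi$ with what the quoted line computes; the difference is exactly the $-\ln\sigma - C$ part, which is constant in $\mu$ but does depend on $\sigma$:

```python
import math

def gaussian_log_density(y, mu, logvar):
    """Full log N(y | mu, sigma^2), where logvar = log(sigma^2)."""
    var = math.exp(logvar)
    return -(y - mu) ** 2 / (2.0 * var) - 0.5 * logvar - 0.5 * math.log(2.0 * math.pi)

def positive_term(y, mu, logvar):
    """Only the squared-error part, as in the quoted line 152."""
    return -(mu - y) ** 2 / 2.0 / math.exp(logvar)

# Example values (arbitrary scalars for illustration).
y, mu, logvar = 1.0, 0.3, -0.5
full = gaussian_log_density(y, mu, logvar)
partial = positive_term(y, mu, logvar)

# The gap between the two is the dropped -0.5*logvar - 0.5*log(2*pi),
# i.e. the "-(ln sigma + C)" terms asked about in the question.
dropped = full - partial
```

Since `logvar` is predicted per-sample, the $-\frac{1}{2}\log\sigma^2$ term is not strictly a constant; dropping it changes the objective slightly, which matches the "I am not sure" caveat above.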

(If you want to discuss the problem further, you can reach me by email: [email protected])

Columbine21 avatar Dec 19 '23 02:12 Columbine21