
Query related to MLPTransformConnector in vae_text

Open · varunkumar-dev opened this issue · 3 comments

Hi, thanks for adding the vae_text example. I have a question about MLPTransformConnector: it expects an input of size self.encoder.cell.hidden_size * 2, and I don't understand why the size is multiplied by 2. Shouldn't we just use h of UnidirectionalRNNEncoder as input to MLPTransformConnector, instead of the concatenation of (h, c)?
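For context, here is a minimal sketch in plain PyTorch (not Texar's actual code; the sizes are made up for illustration) of where the factor of 2 comes from: an LSTM returns both a hidden state h and a cell state c, and concatenating them doubles the connector's input dimension, whereas feeding h alone does not.

```python
# Sketch: why the connector input size is hidden_size * 2 when both
# LSTM states (h, c) are concatenated, vs. hidden_size for h alone.
# Plain PyTorch, illustrative sizes only -- not Texar's implementation.
import torch
import torch.nn as nn

hidden_size = 64
latent_size = 32

lstm = nn.LSTM(input_size=16, hidden_size=hidden_size, batch_first=True)
inputs = torch.randn(8, 10, 16)        # (batch, time, features)
_, (h, c) = lstm(inputs)               # h, c: (num_layers=1, batch, hidden_size)
h, c = h.squeeze(0), c.squeeze(0)      # each: (batch, hidden_size)

# Option 1: feed the concatenated (h, c) -- input dim is hidden_size * 2.
mlp_both = nn.Linear(hidden_size * 2, latent_size * 2)
out_both = mlp_both(torch.cat([h, c], dim=-1))

# Option 2: feed only h -- input dim is just hidden_size.
mlp_h = nn.Linear(hidden_size, latent_size * 2)
out_h = mlp_h(h)

# Both produce the same output shape (e.g. mu and log-variance of the
# VAE latent), only the input projection differs.
```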

varunkumar-dev avatar Sep 27 '19 12:09 varunkumar-dev

Hi, thanks for the issue! Since the example is ported from Texar-TF, I checked the original code, and indeed only h is used there. Sorry for this careless mistake; I guess it happened because UnidirectionalRNNEncoder in Texar-TF doesn't return the cell state by default, while in Texar-PyTorch it does.

However, I feel that using either h alone or both (h, c) as input doesn't really affect the model's results, though I'm not entirely sure about this. Does your experience with VAEs suggest that one of the two is better than the other?

huzecong avatar Sep 27 '19 16:09 huzecong

I don't think it will have a big impact on performance, but it's worth fixing because someone might use this implementation in a paper. And don't feel bad about it; you guys are doing an amazing job.

varunkumar-dev avatar Sep 27 '19 17:09 varunkumar-dev

Thanks for the issue. Using either h alone or the (h, c) combination is legitimate (different papers implement it in different ways). But it would be good to make this clear in the documentation, and probably to add the option of using h alone.
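The option suggested above could be exposed with a flag. Below is a hypothetical sketch (the class name, flag name, and signatures are my own, not Texar's API) of a connector whose input dimension adapts to whether the cell state is used:

```python
# Hypothetical sketch of a configurable state-to-latent connector:
# a flag selects between feeding h alone or the (h, c) concatenation.
# Names and signatures are illustrative, not Texar-PyTorch's API.
import torch
import torch.nn as nn

class StateToLatent(nn.Module):
    """Maps RNN final state(s) to VAE latent parameters (mu, log_var)."""

    def __init__(self, hidden_size, latent_size, use_cell_state=False):
        super().__init__()
        # Input dim doubles only when both states are concatenated.
        in_dim = hidden_size * 2 if use_cell_state else hidden_size
        self.proj = nn.Linear(in_dim, latent_size * 2)
        self.use_cell_state = use_cell_state

    def forward(self, h, c=None):
        x = torch.cat([h, c], dim=-1) if self.use_cell_state else h
        # Split the projection into mean and log-variance halves.
        return self.proj(x).chunk(2, dim=-1)

# Usage with illustrative sizes:
h = torch.randn(8, 64)
c = torch.randn(8, 64)
conn_hc = StateToLatent(hidden_size=64, latent_size=32, use_cell_state=True)
mu, log_var = conn_hc(h, c)            # each: (batch, latent_size)
conn_h = StateToLatent(hidden_size=64, latent_size=32, use_cell_state=False)
mu_h, log_var_h = conn_h(h)            # same output shapes, smaller input
```

Either configuration yields the same latent shapes; only the input projection changes, which matches the point above that both choices are legitimate.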

ZhitingHu avatar Sep 27 '19 21:09 ZhitingHu