
Why are the inputs of HierVAE.decoder different from those of the other models?

WhatAShot opened this issue 4 years ago • 2 comments

The decoder in HierVAE is called as `self.decoder((root_vecs, root_vecs, root_vecs), graphs, tensors, orders)`, so only the root vectors are fed in. But the decoders in the other models take inputs like `self.decoder((x_root_vecs, x_tree_vecs, x_graph_vecs), y_graphs, y_tensors, y_orders)`.

WhatAShot avatar Jul 17 '20 08:07 WhatAShot

This is due to the difference between a generative model and a graph translation model. In a VAE, the latent representation must be a fixed-size vector, while in a graph translation model the input can be a sequence of vectors. That sequence of vectors is fed into the decoder's attention layer. Note that the generative model does not have attention layers.
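To make the contrast concrete, here is a minimal sketch (not the repo's actual code; all names are illustrative) of why the VAE decoder receives one fixed-size vector repeated three times, while a translation decoder can attend over a variable-length sequence of source vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_latent(node_vecs):
    """VAE side: pool a variable-length set of node vectors into ONE
    fixed-size latent, sampled via the reparameterization trick.
    HierVAE then passes this same vector as (root, tree, graph) inputs,
    i.e. (root_vecs, root_vecs, root_vecs), since a generative VAE has
    only this single latent to offer."""
    mu = node_vecs.mean(axis=0)            # fixed-size mean
    logvar = np.zeros_like(mu)             # toy choice: unit variance
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    return z                               # shape: (hidden_dim,)

def translation_context(query, source_vecs):
    """Translation side: the decoder attends over a SEQUENCE of source
    vectors (one per node/substructure), so the input need not be pooled."""
    scores = source_vecs @ query                   # (num_nodes,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax attention weights
    return weights @ source_vecs                   # fixed-size context vector

node_vecs = rng.standard_normal((7, 16))   # 7 nodes, hidden_dim = 16

z = vae_latent(node_vecs)                  # VAE: one vector, fed three times
ctx = translation_context(z, node_vecs)    # translation: attends over all 7
print(z.shape, ctx.shape)                  # both fixed-size: (16,) (16,)
```

The key point the sketch illustrates: sampling a latent from a prior forces the VAE to commit to a fixed-size representation, whereas attention lets the translation decoder consume however many source vectors the input graph produces.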

wengong-jin avatar Jul 21 '20 19:07 wengong-jin

Hi. I'm quite confused about why it has to be a fixed-size vector, because Graph-VAE allows a sequence of vectors, one for each node in the graph. Could you please explain why a generative model can only use a fixed-size vector?

KatarinaYuan avatar Sep 24 '21 11:09 KatarinaYuan