hgraph2graph
Why are the inputs of HierVAE.decoder different from those of the other models?
The decoder in HierVAE is called as self.decoder((root_vecs, root_vecs, root_vecs), graphs, tensors, orders), where only the root vectors are fed in. But the inputs of the decoder in the other models look like self.decoder((x_root_vecs, x_tree_vecs, x_graph_vecs), y_graphs, y_tensors, y_orders).
This is due to the difference between a generative model and a graph translation model. In a VAE, the latent space must be a fixed-size vector, while in the graph translation model the input can be a sequence of vectors. That sequence of vectors is fed into the decoder's attention layer. Note that the generative model does not have attention layers.
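To make the contrast concrete, here is a minimal PyTorch sketch (not the repo's actual code; names and shapes such as step_state, latent, and attn_proj are assumptions for illustration): a VAE decoder can simply concatenate its single fixed-size latent vector into every decoding step, whereas a translation decoder has to attend over a variable-length sequence of source-graph vectors, which is why it takes separate tree/graph vectors as inputs.

```python
import torch
import torch.nn as nn

batch, latent, hidden, n_nodes = 4, 32, 64, 10
step_state = torch.randn(batch, hidden)  # decoder hidden state at one step (assumed)

# 1) VAE-style generative model: the whole molecule is compressed into ONE
#    fixed-size latent vector per sample (reparameterization trick), and the
#    decoder is conditioned on that same vector at every step, e.g. by concatenation.
mu, logvar = torch.zeros(batch, latent), torch.zeros(batch, latent)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)      # [batch, latent]
vae_dec_input = torch.cat([step_state, z], dim=-1)            # no attention needed

# 2) Graph translation model: the source molecule is encoded as a SEQUENCE of
#    vectors (one per motif / atom), so the decoder must attend over them.
x_tree_vecs = torch.randn(batch, n_nodes, hidden)             # per-node encodings
attn_proj = nn.Linear(hidden, hidden)
query = attn_proj(step_state).unsqueeze(1)                    # [batch, 1, hidden]
scores = torch.bmm(query, x_tree_vecs.transpose(1, 2))        # [batch, 1, n_nodes]
context = torch.bmm(torch.softmax(scores, dim=-1), x_tree_vecs).squeeze(1)
trans_dec_input = torch.cat([step_state, context], dim=-1)    # [batch, 2 * hidden]
```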
Hi. I'm quite confused about why it has to be a fixed-size vector, since Graph-VAE allows a sequence of vectors, one for each node in the graph. Could you please explain why a generative model can only use a fixed-size vector?