
Does having more edges on a GNN help learning?

Open · smith-co opened this issue 3 years ago · 1 comment

I am using the Graph2Seq model for an NMT task, with GCN as the encoder.

Graph stats:

  • I have around 400 nodes in the graph per data point.
  • In the current graph, on average, a node is connected to 3.5 neighbours.
  • And I have more semantic edges to add between nodes.

I have the following two questions:

  1. Logically, does adding more edges help a GNN model?

  2. I guess adding more semantic edges might help the model learn about its neighbours faster and could reduce training time, since the model could converge sooner. Is this understanding correct?

smith-co avatar Aug 06 '22 08:08 smith-co

Thanks for your interest. The following are some personal insights. For Q1: yes, edges carry more information that usually helps the downstream task. For Q2: faster convergence is not guaranteed. When the graph is simple, it may still converge quickly, and adding edges brings additional computation cost, which can slow training down. So I suggest comparing different variants of the graph to find the answer for your specific task.
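To make the comparison concrete, here is a minimal sketch (plain NumPy, not the graph4nlp API; the graph sizes mirror the numbers in the question and the random edges are purely illustrative) of one GCN-style propagation step run on a sparse graph and a denser variant. The aggregation cost grows with the number of edges, which is the extra computation mentioned above.

```python
import numpy as np

def gcn_step(adj, feats):
    """One GCN-style step: mean-aggregate neighbour features, H' = D^-1 (A + I) H."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)      # per-node degree (incl. self-loop)
    return (a_hat @ feats) / deg                # degree-normalized aggregation

rng = np.random.default_rng(0)
n = 400                                         # ~400 nodes, as in the question
feats = rng.normal(size=(n, 64))

# Sparse variant: ~3.5 neighbours per node on average.
sparse = (rng.random((n, n)) < 3.5 / n).astype(float)
# Denser variant: extra "semantic" edges roughly doubling the average degree.
dense = np.clip(sparse + (rng.random((n, n)) < 3.5 / n), 0.0, 1.0)

for name, adj in [("sparse", sparse), ("dense", dense)]:
    out = gcn_step(adj, feats)
    print(name, "edges:", int(adj.sum()), "output shape:", out.shape)
```

Timing both variants over a few epochs (and checking validation loss per epoch, not just per step) is one way to answer Q2 empirically for your task.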

AlanSwift avatar Aug 06 '22 10:08 AlanSwift

Thanks for the feedback.

For a large graph, would adding edges reduce the convergence speed?

smith-co avatar Sep 03 '22 21:09 smith-co

I think it depends on your specific design. Adding useful edges will certainly help message passing, but if you add noisy ("negative") edges, I would expect them to harm performance.
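A toy illustration of that point (my own construction, not from graph4nlp): two groups of nodes start with distinct features, and mean-aggregation over within-group edges preserves the separation, while extra cross-group ("negative") edges wash it out.

```python
import numpy as np

def mean_aggregate(adj, feats):
    """Mean-aggregate each node's neighbourhood, including a self-loop."""
    a_hat = adj + np.eye(adj.shape[0])
    return (a_hat @ feats) / a_hat.sum(axis=1, keepdims=True)

n = 6                                       # nodes 0-2 in group A, 3-5 in group B
feats = np.array([[1.0]] * 3 + [[-1.0]] * 3)

within = np.zeros((n, n))                   # edges only inside each group
for i, j in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    within[i, j] = within[j, i] = 1.0

across = within.copy()                      # add cross-group ("negative") edges
for i, j in [(0, 3), (2, 5)]:
    across[i, j] = across[j, i] = 1.0

for name, adj in [("within-group only", within), ("with cross-group", across)]:
    out = mean_aggregate(adj, feats)
    gap = out[:3].mean() - out[3:].mean()   # separation between the group means
    print(f"{name}: group separation = {gap:.2f}")
```

The cross-group variant shrinks the separation between the two groups' representations, which is the sense in which uninformative edges can hurt downstream performance.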

AlanSwift avatar Sep 18 '22 18:09 AlanSwift