
How does the model stay robust while still using adjacency information implicitly?

Open alvinsun724 opened this issue 4 years ago • 2 comments

Hi, your work is really inspiring and I have one question.

In the paper, you say the model is more robust when facing large-scale graph data and corrupted adjacency information because it uses the adjacency information only implicitly, unlike GCN, which uses it directly during the information-aggregation phase.

However, I am wondering about this: you still use the adjacency information (possibly up to the 4th power of the adjacency matrix) when computing the NContrast loss. How can the model maintain robust performance under heavily corrupted adjacency information, given that the NContrast loss still requires the adjacency matrix during training?
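For concreteness, here is a minimal NumPy sketch of the kind of r-hop contrastive loss I mean, where the r-th power of the adjacency matrix selects the positive pairs. The function and variable names are my own illustration, not code from this repo, and the paper may weight pairs differently:

```python
import numpy as np

def ncontrast_loss(z, adj, r=2, tau=1.0):
    """Sketch of a neighbourhood-contrastive loss.

    Node pairs joined by a path of length r (nonzero entries of adj**r)
    are treated as positives; every other pair only appears in the
    denominator.
    """
    # r-th power of the adjacency marks node pairs joined by an r-step path
    adj_r = np.linalg.matrix_power(adj, r) > 0
    np.fill_diagonal(adj_r, False)                 # drop self-pairs

    # temperature-scaled cosine similarities between all embedding pairs
    z_norm = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = np.exp(z_norm @ z_norm.T / tau)
    np.fill_diagonal(sim, 0.0)                     # drop self-similarity

    pos = (sim * adj_r).sum(axis=1)                # mass on r-hop neighbours
    denom = sim.sum(axis=1)                        # mass on every other node
    has_pos = pos > 0                              # skip isolated nodes
    return float(-np.log(pos[has_pos] / denom[has_pos]).mean())
```

Note that the adjacency enters only this training loss; once the encoder is trained, a forward pass never touches `adj`.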

Is it because you only need the adjacency information during training, rather than in both the training and test phases? Or is there some other justification?

I am really confused about this and look forward to your reply.

Thanks a lot

alvinsun724 (Nov 24 '21)

Hi, I think this issue helps: https://github.com/yanghu819/Graph-MLP/issues/5.

yanghu819 (Nov 26 '21)

Thanks. Is my understanding correct that the performance difference between Graph-MLP and GCN is mainly due to Graph-MLP using no adjacency information in the test phase?
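To spell out the distinction I have in mind: adjacency appears in a GCN's forward pass but not in an MLP's, so only the former is exposed to corrupted edges at test time. A toy illustration with made-up weights, not either model's real code:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))            # toy weight matrix (illustrative)

def mlp_predict(x):
    """Graph-MLP-style inference: a plain forward pass over node features."""
    return (x @ W).argmax(axis=1)

def gcn_predict(x, adj):
    """GCN-style inference: feature aggregation needs adj explicitly."""
    return ((adj @ x) @ W).argmax(axis=1)

x = rng.normal(size=(5, 4))
adj_clean = np.eye(5)                                # identity: no mixing
adj_noisy = adj_clean + (rng.random((5, 5)) < 0.4)   # randomly added edges
```

With `adj_clean` the two predictors agree, but corrupting `adj` can only ever change `gcn_predict`; `mlp_predict` never reads it.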

alvinsun724 (Nov 29 '21)