how_attentive_are_gats
Code for the paper "How Attentive are Graph Attention Networks?" (ICLR'2022)
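The paper's central claim is that the original GAT computes only *static* attention (every query node ranks the keys identically), while GATv2 reorders the operations to obtain *dynamic* attention. A minimal NumPy sketch of the two scoring functions, with randomly initialized illustrative parameters (`W`, `W1`, `W2`, `a`, `a2` are placeholders, not the repo's trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 5                       # feature dim, number of nodes
H = rng.normal(size=(n, d))       # node features

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

# --- GAT scoring: e(h_i, h_j) = LeakyReLU(a^T [W h_i || W h_j]) ---
W = rng.normal(size=(d, d))
a = rng.normal(size=2 * d)
Wh = H @ W.T
s_i = Wh @ a[:d]                  # query-side contribution, shape (n,)
s_j = Wh @ a[d:]                  # key-side contribution, shape (n,)
gat = leaky_relu(s_i[:, None] + s_j[None, :])   # (n, n) score matrix

# --- GATv2 scoring: e(h_i, h_j) = a^T LeakyReLU(W [h_i || h_j]) ---
# W on the concatenation is split into two d x d blocks W1, W2 for clarity.
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
a2 = rng.normal(size=d)
gatv2 = leaky_relu((H @ W1.T)[:, None, :] + (H @ W2.T)[None, :, :]) @ a2

# GAT's attention is static: since LeakyReLU is monotone and s_i is a
# per-row constant, every query i selects the same highest-scoring key j.
print(np.unique(gat.argmax(axis=1)))    # a single key index for all queries
print(np.unique(gatv2.argmax(axis=1)))  # GATv2's selected key can vary per query
```

Because the nonlinearity in GATv2 is applied *before* the dot product with `a2`, the key ranking can depend on the query, which is the expressiveness gap Figure 1 of the paper illustrates.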
In the original paper, GAT outperforms GraphSAGE on PPI. Why didn't you compare GATv2 on PPI?
Thanks for your great contribution! I'm confused about Figure 1 (a) in your paper. Which layer of GAT does this attention matrix come from? Is it the attention matrix of all layers...
This repo does not include the code used to draw Figure 1 of the paper.
Hello, could you provide the specific hyperparameter settings for reproducing the test results on the PubMed dataset reported in the paper?