pyGAT
code question
`x = torch.cat([att(x, adj) for att in self.attentions], dim=1)` — does this line compute the attention coefficients? I can't understand how it works. Sorry, I'm new to this.
I think in this line, x is processed by multi-head attention. Each `att(x, adj)` computes one head's attention-weighted output. These head outputs are concatenated along the feature dimension and sent to a final layer whose output dimension equals the number of label classes; that layer plays a role similar to the MLP layer in the traditional attention algorithm. You can search for "multi-head attention" for more information.
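To make the shapes concrete, here is a minimal, self-contained sketch of what that concatenation does. The `DummyAttentionHead` below is a hypothetical stand-in: it only shows input/output shapes, while pyGAT's real `GraphAttentionLayer` computes learned attention coefficients over `adj` before aggregating. The sizes are Cora-like but purely for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for pyGAT's GraphAttentionLayer: maps (N, in_features) node
# features to (N, out_features). The real layer uses adj to compute
# attention coefficients and aggregate neighbor features; this stub
# skips that and only demonstrates the shapes.
class DummyAttentionHead(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x, adj):
        return self.linear(x)  # real layer: attention-weighted aggregation

nheads, in_features, hidden = 8, 1433, 8   # illustrative, Cora-like sizes
heads = [DummyAttentionHead(in_features, hidden) for _ in range(nheads)]

x = torch.randn(2708, in_features)   # node features (N, in_features)
adj = torch.ones(2708, 2708)         # placeholder adjacency matrix

# The line from the question: run every head independently on the same
# input, then concatenate the heads' outputs along the feature dimension.
out = torch.cat([att(x, adj) for att in heads], dim=1)
print(out.shape)  # torch.Size([2708, 64]) == (N, nheads * hidden)
```

So each head produces its own (N, hidden) representation, and `torch.cat(..., dim=1)` stacks them side by side into (N, nheads * hidden), which is then the input size the final classification layer expects.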