
model

Open congcongzhang1996 opened this issue 4 years ago • 1 comments

Is the model in the code the same as the one described in the paper? I have some doubts about `models.py`.

congcongzhang1996 avatar Nov 11 '20 12:11 congcongzhang1996

Have you figured out the code yet? In the paper, type-level attention is computed first, followed by node-level attention, but the code appears to do the opposite. Also, the node-level attention has no concatenation operation like the one in the paper. And even after applying softmax there are extra steps whose connection to the paper's equations I don't understand:

attention = F.softmax(attention, dim=1)                              # row-wise normalization of scores
attention = torch.mul(attention, adj.sum(1).repeat(M, 1).t())        # rescale each row by that node's degree
attention = torch.add(attention * self.gamma, adj.to_dense() * (1 - self.gamma))  # blend with raw adjacency
h_prime = torch.matmul(attention, g)                                 # aggregate neighbor features
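For what it's worth, the extra steps can be sketched in plain NumPy (a minimal illustration of what the snippet computes mechanically, not the repo's code; `scores`, `adj`, `g`, and `gamma` are stand-ins for the snippet's variables):

```python
import numpy as np

def blend_attention(scores, adj, g, gamma):
    """Sketch of the post-scoring steps in the quoted snippet.

    scores: raw attention scores (N x N)
    adj:    dense adjacency matrix (N x N)
    g:      transformed node features (N x F)
    gamma:  mixing weight between learned attention and adjacency
    """
    # Row-wise softmax (dim=1 in the torch code): rows sum to 1.
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    attention = e / e.sum(axis=1, keepdims=True)
    # Rescale each row by that node's degree (adj.sum(1) in the snippet),
    # undoing the softmax's normalization to 1.
    attention = attention * adj.sum(axis=1, keepdims=True)
    # Blend the learned attention with the raw adjacency, weighted by gamma;
    # gamma = 0 recovers plain adjacency-based aggregation.
    attention = gamma * attention + (1 - gamma) * adj
    # Aggregate neighbor features.
    return attention @ g
```

With `gamma = 0` this reduces to `adj @ g` (ordinary graph aggregation), so the extra steps look like a residual-style mix between learned attention and the fixed graph structure, though that interpretation does not appear in the paper's equations.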

abhinab303 avatar Nov 23 '20 09:11 abhinab303