MuGNN
About adj matrix
Hi,
I am looking into your code, but it seems that in models.py, self.multi_head_att_layers (KG self-attention) and self.relation_attention_gcns (cross-KG attention) use the same adjacency matrix, rather than a different adjacency matrix for each channel. Is there anything wrong with my understanding?
Hi,
The two channels use adjacency matrices with the same connectivity, but the edge weights are computed separately by the KG Self-Attention module and the Cross-KG Attention module, one per channel.
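To illustrate the idea (not the actual MuGNN code): both channels can share one binary adjacency pattern while each attention module produces its own edge weights over that pattern. The toy attention function, parameter vectors, and matrix sizes below are all hypothetical, just to show "same connectivity, different weights":

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 4, 8
# Shared binary connectivity used by both channels (hypothetical graph)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
h = rng.normal(size=(n, d))  # node features

def attention_weights(h, adj, a):
    """Masked softmax over neighbors: scores kept where adj == 1, -inf elsewhere."""
    s = h @ a                                  # (n, 1) toy per-node score
    scores = s + s.T                           # additive attention, (n, n)
    scores = np.where(adj > 0, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    e = np.where(adj > 0, e, 0.0)
    return e / e.sum(axis=1, keepdims=True)

# Two independent attention parameters -> two different weighted adjacencies
a_self = rng.normal(size=(d, 1))    # stands in for the self-attention channel
a_cross = rng.normal(size=(d, 1))   # stands in for the cross-KG channel
w_self = attention_weights(h, adj, a_self)
w_cross = attention_weights(h, adj, a_cross)
```

Both w_self and w_cross are nonzero exactly where adj is nonzero, but their values differ because each channel has its own parameters.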
Okay, thanks for your reply!
By the way, could you please tell me how you generated the relation seed files? They do not seem to exist in the original dataset.