
About the edge embedding

Open Abraham12580 opened this issue 4 years ago • 8 comments

As the paper writes, the edge values are either continuous or discrete. But how are the edge values transformed into edge embeddings? What does the edge embedding look like? I don't see this explained clearly in the paper. Hoping for the authors' kind reply.

Abraham12580 avatar Feb 16 '21 02:02 Abraham12580

(image attachment)

Abraham12580 avatar Feb 16 '21 03:02 Abraham12580

The initial edge embedding is the known attribute value from the data matrix; it is updated by Equation 3 in Section 3.3 of the paper.

maxiaoba avatar Feb 18 '21 23:02 maxiaoba

Thanks for your reply. I have seen those words in your paper; I'm sorry I didn't express myself very well. It is actually the form of the edge embedding that puzzles me. In other words, is the edge embedding a real number from the data matrix, or a vector transformed from the data matrix? From Figure 1 in the paper, I think the edge embedding of the edge O1-F1 is a real number, 0.3. But I don't think a real number can deliver the complex information between sample nodes and feature nodes. And if the edge embedding is a vector, I don't know how it would be transformed from a single real number like 0.3.


Abraham12580 avatar Feb 21 '21 02:02 Abraham12580

The initial edge embedding is a real number from the data matrix, and this is enough because it contains everything we know from the original data matrix. In the following layers it is updated to a vector by Equation 3 in Section 3.3 of the paper, to encode the additional statistical information from message passing.
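To make the answer above concrete, here is a minimal sketch of that idea in PyTorch: the edge starts as the raw scalar matrix entry (e.g. 0.3) and is lifted to a vector by a learned map over the concatenation of the endpoint embeddings and the previous edge embedding. Module names and dimensions here are illustrative, not GRAPE's actual ones; the exact form is Equation 3 in the paper.

```python
import torch
import torch.nn as nn

node_dim = 16  # illustrative sizes, not GRAPE's defaults
edge_dim = 16

# Learned update lifting [h_u, h_v, scalar edge value] to an edge vector.
edge_update = nn.Sequential(
    nn.Linear(2 * node_dim + 1, edge_dim),
    nn.ReLU(),
)

h_u = torch.randn(node_dim)   # observation-node embedding
h_v = torch.randn(node_dim)   # feature-node embedding
e_uv = torch.tensor([0.3])    # initial edge embedding: the raw matrix entry

# After one update, the edge embedding is a vector, not a scalar.
e_uv_next = edge_update(torch.cat([h_u, h_v, e_uv], dim=-1))
assert e_uv_next.shape == (edge_dim,)
```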

maxiaoba avatar Mar 11 '21 18:03 maxiaoba

Thanks for this great repository! I had a question regarding Equation (3).

@maxiaoba, @dingdaisy, @JiaxuanYou - in Equation (3), I see that you use h_v^{l-1}, which is the previous-layer embedding of the node v that you want to update.

So if I understand correctly, to update node v we look at the embedding of node v itself and the connecting edge E_{uv}.

In the code, on line 64 of egsage.py, I see this concatenation

m_j = torch.cat((x_j, edge_attr), dim=-1)
m_j = self.message_activation(self.message_lin(m_j))

From what I understand, x_j is the embedding of the neighbouring node (not of the node being updated itself). If that is true, I see a conflict between the equation and the code.
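For anyone hitting the same confusion: this is the indexing convention of PyTorch Geometric's `MessagePassing` base class, which `egsage.py` builds on. For an edge stored as a column `(u -> v)` of `edge_index`, `x_j` inside `message()` is the source/neighbour node's embedding and `x_i` the target node's. A tiny self-contained illustration of that gather (plain tensor indexing, no PyG needed; values are made up):

```python
import torch

# 3 nodes with 2-dim embeddings.
x = torch.arange(6, dtype=torch.float).view(3, 2)

# Edges 0->1, 1->2, 2->0; row 0 holds sources (j), row 1 targets (i).
edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 0]])

x_j = x[edge_index[0]]  # neighbour embeddings, one row per edge (fed to message())
x_i = x[edge_index[1]]  # embeddings of the nodes being updated

# The first edge's neighbour embedding is node 0's embedding.
assert torch.equal(x_j[0], x[0])
```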

Can you help me to better understand this ?

I am on a tight deadline - I would really appreciate any help. Thanks so much !

Alex-Mathai-98 avatar May 25 '21 08:05 Alex-Mathai-98

@maxiaoba I have a question about the edge embedding. I noticed the different treatment of discrete and continuous features: a discrete feature is transformed into a one-hot vector, while a continuous feature keeps its original scalar value. From my understanding, the transformed edge features are then not aligned: the transformed discrete feature is a vector while the continuous feature is a scalar, yet both node and edge features have to be stored in a single tensor. Could you please help me with this issue? It would be great if you could also point out where in the code this is handled.
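One common way to reconcile the shapes described above (a sketch only; this is not necessarily what GRAPE's preprocessing does, and the helper names are made up) is to embed both feature types into vectors of a common width before stacking them into one `edge_attr` tensor: the scalar occupies one slot and the one-hot occupies its class slots, with zero padding elsewhere.

```python
import torch

edge_dim = 4  # illustrative common width

def continuous_to_vec(value, dim=edge_dim):
    # Scalar goes into the first slot; the rest is zero padding.
    v = torch.zeros(dim)
    v[0] = value
    return v

def discrete_to_vec(index, num_classes, dim=edge_dim):
    # One-hot encoding, assuming num_classes <= dim.
    v = torch.zeros(dim)
    v[index] = 1.0
    return v

# Both edge types now have the same width and stack into one tensor.
edge_attr = torch.stack([continuous_to_vec(0.3),
                         discrete_to_vec(2, num_classes=3)])
assert edge_attr.shape == (2, edge_dim)
```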

voladorlu avatar Sep 27 '21 08:09 voladorlu