neural_graph_collaborative_filtering

The embedding propagation code seems inconsistent with the paper

Open Dousia opened this issue 4 years ago • 1 comment

# k indexes the propagation layer (this block sits inside the repo's per-layer loop)
temp_embed = []
for f in range(self.n_fold):
    temp_embed.append(tf.sparse_tensor_dense_matmul(A_fold_hat[f], ego_embeddings))

# aggregate neighbor messages: side_embeddings = L * E
side_embeddings = tf.concat(temp_embed, 0)

# sum term: transform the aggregated messages
sum_embeddings = tf.nn.leaky_relu(
    tf.matmul(side_embeddings, self.weights['W_gc_%d' % k]) + self.weights['b_gc_%d' % k])

# bi-interaction term: element-wise product E ⊙ (L * E), then transform
bi_embeddings = tf.multiply(ego_embeddings, side_embeddings)
bi_embeddings = tf.nn.leaky_relu(
    tf.matmul(bi_embeddings, self.weights['W_bi_%d' % k]) + self.weights['b_bi_%d' % k])

ego_embeddings = sum_embeddings + bi_embeddings

In the code above, sum_embeddings and bi_embeddings are both computed from the same side_embeddings, i.e. LE. According to the paper, however, sum_embeddings should be computed from (L + I)E (the aggregated neighbor messages plus the self-connection), while only bi_embeddings should be computed from LE.
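
For reference, the matrix-form propagation rule in the NGCF paper is

    E^{(l)} = \mathrm{LeakyReLU}\big( (\mathcal{L} + I)\,E^{(l-1)} W_1^{(l)} + \mathcal{L} E^{(l-1)} \odot E^{(l-1)} W_2^{(l)} \big), \qquad \mathcal{L} = D^{-1/2} A D^{-1/2}

so the first (sum) term should be driven by (L + I)E^{(l-1)}, while only the bi-interaction term uses LE^{(l-1)}. Below is a minimal sketch of what a paper-faithful layer could look like, assuming A_fold_hat holds L alone (no identity folded in) so the self-connection is added explicitly; the variable names mirror the snippet above, but this is illustrative, not the repo's code:

    temp_embed = []
    for f in range(self.n_fold):
        temp_embed.append(tf.sparse_tensor_dense_matmul(A_fold_hat[f], ego_embeddings))
    side_embeddings = tf.concat(temp_embed, 0)  # L * E

    # sum term per the paper: (L + I) * E = L*E + E, then transform
    sum_embeddings = tf.nn.leaky_relu(
        tf.matmul(side_embeddings + ego_embeddings, self.weights['W_gc_%d' % k])
        + self.weights['b_gc_%d' % k])

    # bi-interaction term: E ⊙ (L * E), unchanged from the repo's version
    bi_embeddings = tf.multiply(ego_embeddings, side_embeddings)
    bi_embeddings = tf.nn.leaky_relu(
        tf.matmul(bi_embeddings, self.weights['W_bi_%d' % k]) + self.weights['b_bi_%d' % k])

    ego_embeddings = sum_embeddings + bi_embeddings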

Could you please explain why?

Dousia · Jun 05 '21

I agree with you, and I also think the way they normalize the adj_matrix in the code (a row-wise mean, i.e. D^{-1} A) is not consistent with the symmetric normalization D^{-1/2} A D^{-1/2} used in the paper.
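
For context, a minimal NumPy/SciPy sketch contrasting the two normalizations (adj is the full (users + items) × (users + items) adjacency matrix; the function names are illustrative, not the repo's):

    import numpy as np
    import scipy.sparse as sp

    def mean_norm(adj):
        # Row-wise "mean" normalization: D^{-1} A (each nonzero row sums to 1).
        rowsum = np.array(adj.sum(axis=1)).flatten().astype(np.float64)
        d_inv = np.where(rowsum > 0, 1.0 / np.maximum(rowsum, 1e-12), 0.0)
        return sp.diags(d_inv).dot(adj)

    def sym_norm(adj):
        # Symmetric normalization from the paper: D^{-1/2} A D^{-1/2}.
        rowsum = np.array(adj.sum(axis=1)).flatten().astype(np.float64)
        d_inv_sqrt = np.where(rowsum > 0, 1.0 / np.sqrt(np.maximum(rowsum, 1e-12)), 0.0)
        d = sp.diags(d_inv_sqrt)
        return d.dot(adj).dot(d)

The two coincide only when node degrees are uniform; on the skewed degree distributions typical of recommendation data they scale messages quite differently.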

GuoshenLi · Jan 20 '22