
What is different from the original code?

Open hkthirano opened this issue 7 years ago • 5 comments

When I trained it, the results from this repository are more accurate than those reported in the original paper (SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS) on the Cora dataset.

What is different from the original code?

hkthirano avatar Sep 25 '18 09:09 hkthirano

The data splits are different, the normalization of the adjacency matrix is slightly different, and there is no dropout on the first layer.
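For reference, here is a rough sketch of the two adjacency normalizations (function names are mine, not from either codebase): the original paper uses the symmetric normalization D^{-1/2}(A+I)D^{-1/2}, while this repo row-normalizes, i.e. D^{-1}(A+I).

```python
import numpy as np
import scipy.sparse as sp

def row_normalize(adj):
    # D^-1 (A + I): each row of the propagation matrix sums to 1
    adj = adj + sp.eye(adj.shape[0])
    rowsum = np.asarray(adj.sum(1)).flatten()
    r_inv = np.divide(1.0, rowsum, out=np.zeros_like(rowsum), where=rowsum > 0)
    return sp.diags(r_inv) @ adj

def sym_normalize(adj):
    # D^-1/2 (A + I) D^-1/2: the symmetric normalization from the paper
    adj = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj.sum(1)).flatten()
    d_inv_sqrt = np.divide(1.0, np.sqrt(deg), out=np.zeros_like(deg), where=deg > 0)
    d_mat = sp.diags(d_inv_sqrt)
    return d_mat @ adj @ d_mat
```

The two matrices only coincide on regular graphs, so on Cora (which has a skewed degree distribution) the propagation rule is genuinely different.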


tkipf avatar Sep 25 '18 09:09 tkipf

I have been trying to understand what you mean by 'there is no dropout on the first layer' and 'sparse dropout'.

def forward(self, x, adj):
    x = F.relu(self.gc1(x, adj))
    x = F.dropout(x, self.dropout, training=self.training)  # <-- this line
    x = self.gc2(x, adj)
    return F.log_softmax(x, dim=1)

^ I am assuming this to be the dropout on the first layer. Please let me know what I am missing.

Thanks!

kkteru avatar Feb 02 '19 23:02 kkteru

Yes, this is correct; my wording was a bit ambiguous: the original TensorFlow-based implementation also applies dropout to the input features directly (which is what I meant by "first layer").
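In other words, a forward pass matching the original TensorFlow version would look roughly like this (a sketch only; the class name is mine, and plain `Linear` layers stand in for the repo's `GraphConvolution` layers):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNWithInputDropout(nn.Module):
    """Sketch of a GCN forward pass that also drops out the *input*
    features, as the original TensorFlow implementation does."""
    def __init__(self, nfeat, nhid, nclass, dropout=0.5):
        super().__init__()
        self.gc1 = nn.Linear(nfeat, nhid)   # placeholder for GraphConvolution
        self.gc2 = nn.Linear(nhid, nclass)  # placeholder for GraphConvolution
        self.dropout = dropout

    def forward(self, x, adj):
        # dropout on the raw input features: the step missing in this repo
        x = F.dropout(x, self.dropout, training=self.training)
        x = F.relu(adj @ self.gc1(x))
        # dropout after the first layer: present in both implementations
        x = F.dropout(x, self.dropout, training=self.training)
        return F.log_softmax(adj @ self.gc2(x), dim=1)
```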


tkipf avatar Feb 03 '19 12:02 tkipf

I see. Thanks for the clarification!

kkteru avatar Feb 03 '19 18:02 kkteru


Hi, this is some very clean code, good job. I compared the accuracy on Cora between this repo and the GCN sample code from PyG and DGL. Surprisingly, the result from this one is about 2 to 3 points better (0.83 vs. 0.81). Is it because of the slightly different adjacency matrix normalization? Thanks a lot.

chenzhao avatar Jan 29 '21 09:01 chenzhao