
Clarification on Inductive Setting in Graph Neural Networks

Open SangwookBaek opened this issue 1 year ago • 0 comments

Dear THUDM,

Firstly, I would like to express my sincere gratitude for your efforts. Your work has been immensely helpful.

I have a question about the implementation of the inductive setting in graph neural networks, as described in your paper. As I currently understand it, the dataset is loaded as in the code below for the inductive setting. According to Section 4.1 of your paper, the inductive setting follows the GraphSAGE methodology.

import numpy as np
import dgl

train_mask = g.ndata["train_mask"]
feat = g.ndata["feat"]
feat = scale_feats(feat)  # GraphMAE's feature-standardization helper
g.ndata["feat"] = feat

g = g.remove_self_loop()
g = g.add_self_loop()

# Induce the training subgraph on the train-mask nodes only
train_nid = np.nonzero(train_mask.numpy())[0].astype(np.int64)
train_g = dgl.node_subgraph(g, train_nid)
train_dataloader = [train_g]
valid_dataloader = [g]  # validation/test run on the full graph
test_dataloader = valid_dataloader
eval_train_dataloader = [train_g]

My question is: in this inductive setting, are we sampling a subgraph induced by the nodes in `train_mask` and then passing this subgraph through a GAE (Graph AutoEncoder) structure? Following this, is the model then validated against the entire graph, evaluating only on the nodes marked by the validation mask or test mask?
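To illustrate my understanding of the distinction, here is a small toy sketch (using a hypothetical 5-node edge list and train mask, and plain NumPy rather than DGL) of what inducing the training subgraph does, versus the transductive case where the full graph stays visible during training:

```python
import numpy as np

# Hypothetical toy graph: 5 nodes, an edge list, and a boolean train mask.
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [0, 4]])
train_mask = np.array([True, True, True, False, False])

# Inductive: induce the subgraph on train nodes only -- any edge touching
# a held-out node is invisible at training time (mirrors dgl.node_subgraph).
train_nodes = np.nonzero(train_mask)[0]
keep = np.isin(edges, train_nodes).all(axis=1)
train_edges = edges[keep]  # only edges among train nodes survive
print(train_edges.tolist())  # [[0, 1], [1, 2]]

# Transductive: the full edge list is used for message passing at training
# time; the mask only hides val/test node labels from the loss.
print(edges.shape[0])  # all 5 edges remain visible
```

If that is right, the inductive model never sees the structure around validation/test nodes during pretraining, and only encounters them when the full graph is fed in at evaluation time.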

Additionally, I would greatly appreciate it if you could elaborate on the differences between the inductive and transductive settings in this context.

Thank you for your time and assistance in this matter.

SangwookBaek avatar Jan 11 '24 01:01 SangwookBaek