NRGNN
Some inconsistencies between the paper and the code
Hi Dai, there are some inconsistencies between the paper and the code, listed below.
1. The paper says that f_p and f_e are pretrained, but in the code it seems you just compute feature cosine similarities to build the potential edge set at the very beginning (and this step is essential: without it, the performance decreases greatly). I don't see any pretraining step (see the first sketch after these two questions).
2. In the paper, the total loss is composed of L_E (the reconstruction loss), L_P (the cross-entropy loss of the pseudo-label predictor on the training set), and L_G (the cross-entropy loss of the final classifier); it is written as argmin L_G + α·L_E + β·L_P. On the contrary, line 133 in NRGNN.py reads `total_loss = loss_gcn + loss_pred + self.args.alpha * rec_loss + self.args.beta * loss_add`, and the `loss_add` term is not consistent with L_P: there are apparently four components in the code, and `loss_pred` is already the paper's L_P (see the second sketch below). Are there any details about `loss_add` in the paper that I missed?
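(On question 1: for concreteness, here is a minimal sketch of what that initial similarity step appears to do, assuming a plain top-k cosine rule; the function name, the choice of k, and all variable names are mine, not the repository's exact code.)

```python
import torch
import torch.nn.functional as F

def candidate_edges_by_cosine(features, k=10):
    """Sketch: connect each node to its k most cosine-similar nodes.
    An assumption about the preprocessing step, not NRGNN's exact code."""
    x = F.normalize(features, p=2, dim=1)   # unit-norm rows, so x @ x.T is cosine similarity
    sim = x @ x.t()                         # (N, N) pairwise similarities
    sim.fill_diagonal_(-1.0)                # rule out self-loops
    dst = sim.topk(k, dim=1).indices.reshape(-1)        # k best neighbors per node
    src = torch.arange(x.size(0)).repeat_interleave(k)  # matching source nodes
    return torch.stack([src, dst])          # (2, N*k) candidate edge index
```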
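(On question 2: the two objectives side by side as I read them, with dummy scalar losses and hypothetical hyperparameter values; only the structure matters.)

```python
import torch

# Dummy scalar losses and hypothetical hyperparameters, just to compare structure.
loss_gcn, loss_pred, rec_loss, loss_add = (torch.tensor(v) for v in (0.7, 0.5, 0.3, 0.2))
alpha, beta = 0.03, 1.0

# Paper (three terms): argmin L_G + alpha * L_E + beta * L_P
total_loss_paper = loss_gcn + alpha * rec_loss + beta * loss_pred

# Code, line 133 of NRGNN.py (four terms): loss_pred enters unweighted, and an
# extra beta-weighted loss_add appears that the paper does not define.
total_loss_code = loss_gcn + loss_pred + alpha * rec_loss + beta * loss_add
```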
Thanks
I would also like to know about the second question, which confuses me deeply. What's more, I have repeated the experiments in the paper and the accuracy drops by several percentage points. I think this is related to the two questions above.
I agree with you. I think this operation is very tricky, since `loss_add` is obtained from the best predictions of previous epochs of the pseudo-label miner, which is unfair to the other baselines. Moreover, according to my ablation, this trick improves the performance greatly.
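(For readers following the thread, here is a rough, hypothetical reconstruction of what such a `loss_add` could look like, based purely on the description above; every name, the confidence threshold, and the dummy tensors are mine, not the repository's.)

```python
import torch
import torch.nn.functional as F

num_nodes, num_classes = 100, 7
logits = torch.randn(num_nodes, num_classes, requires_grad=True)       # current model output
best_pred = torch.softmax(torch.randn(num_nodes, num_classes), dim=1)  # cached best prediction from earlier epochs

# Pseudo-label the nodes where the cached prediction is confident.
conf, pseudo_labels = best_pred.max(dim=1)
idx_add = (conf > 0.3).nonzero(as_tuple=True)[0]   # hypothetical confidence threshold

# Extra supervised term on those pseudo-labelled nodes.
loss_add = F.cross_entropy(logits[idx_add], pseudo_labels[idx_add])
```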
Hey, I was trying to run the code but it gives this error: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [0]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Did anyone else face this issue, and what fix did you try?
I think it may be caused by the package versions.
I encountered the error here; full traceback below (please ignore the line numbers):
```
     16 esgnn = NRGNN(args, device)
---> 17 esgnn.fit(features, adj, noise_labels, idx_train, idx_val)
     18
     19 print("=====test set accuracy=======")

in fit(self, features, adj, labels, idx_train, idx_val)
     58 for epoch in range(args.epochs):
     59     print(epoch)
---> 60     self.train(epoch, features, edge_index, idx_train, idx_val)
     61
     62 print("Optimization Finished!")

in train(self, epoch, features, edge_index, idx_train, idx_val)
    118
    119 total_loss = loss_gcn + loss_pred + self.args.alpha * rec_loss + self.args.beta * loss_add
--> 120 total_loss.backward()
    121 self.optimizer.step()
```
The traceback hints that the problem happens in the ReLU() call.
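(For anyone hitting the same thing, this is the generic PyTorch failure mode, reproduced in a few self-contained lines unrelated to NRGNN's actual tensors: ReLU saves its output for the backward pass, so any later in-place write to that output invalidates it.)

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.relu(x)    # ReluBackward0 saves its output y for the backward pass
y[0] = 0.0           # in-place write bumps y's version counter from 0 to 1
y.sum().backward()   # RuntimeError: ... output 0 of ReluBackward0 ... is at version 1; expected version 0
```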
Just add detach() here:
`estimated_weights = F.relu(output.detach())`
@pintu-dot
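(A caveat on the detach() fix, my own note and not tested against this repository: detach() also stops gradients from flowing back into the edge predictor through `output`, which may change training. If the goal is only to silence the in-place error while keeping gradients, clone() is a common alternative, assuming a later in-place write to `estimated_weights` is what trips autograd.)

```python
# The clone breaks the aliasing with ReLU's saved output, so a later in-place
# write to estimated_weights no longer trips autograd; gradients still flow.
estimated_weights = F.relu(output).clone()
```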