Weight projection during testing
Hi there @SwiftieH, thanks for your impressive work!
While looking at the code in nodeclassification/layers.py, I got a little confused about the weight projection. During training, every time forward() is called, self.W is projected so that ||W||_inf < kappa/rho(A):
def forward(self, X_0, A, U, phi, A_rho=1.0, fw_mitr=500, bw_mitr=500, A_orig=None):
    """Allow one to use a different A matrix for convolution operation in equilibrium equ"""
    if self.k is not None:  # when self.k = 0, A_rho is not required
        self.W = projection_norm_inf(self.W, kappa=self.k/A_rho)
    support_1 = torch.spmm(torch.transpose(U, 0, 1), self.Omega_1.T).T
    support_1 = torch.spmm(torch.transpose(A, 0, 1), support_1.T).T
    support_2 = torch.spmm(torch.transpose(U, 0, 1), self.Omega_2.T).T
    b_Omega = support_1  # + support_2
    return ImplicitFunction.apply(self.W, X_0, A if A_orig is None else A_orig, b_Omega, phi, fw_mitr, bw_mitr)
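For reference, my rough mental model of what that step enforces (just a simplified sketch of the constraint ||W||_inf <= kappa via row rescaling, not the actual projection_norm_inf in the repo):

import torch

def project_inf_norm(W: torch.Tensor, kappa: float) -> torch.Tensor:
    # ||W||_inf is the maximum absolute row sum; shrink only the rows whose
    # absolute sum exceeds kappa, leaving compliant rows untouched.
    row_l1 = W.abs().sum(dim=1, keepdim=True)                 # per-row l1 norms
    scale = torch.clamp(kappa / (row_l1 + 1e-12), max=1.0)    # <=1 only for violating rows
    return W * scale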
It seems that during testing, the weight learned from training is projected again before being passed into the implicit function. Is that the intended behavior? Or should we just use the 'original' weight from training and skip the projection at test time, which could be achieved with e.g.
if self.training:
    if self.k is not None:  # when self.k = 0, A_rho is not required
        self.W = projection_norm_inf(self.W, kappa=self.k/A_rho)
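(The reason I ask: if the stored self.W already satisfies the constraint after training, re-projecting at test time should be a no-op, but if the optimizer's last update pushed it outside the bound, the two options use different weights. A hypothetical check on a trained layer, assuming access to the layer object and the same A_rho value used in forward():)

import torch

with torch.no_grad():
    inf_norm = layer.W.abs().sum(dim=1).max().item()  # ||W||_inf = max absolute row sum
    bound = layer.k / A_rho                            # the kappa passed to projection_norm_inf
    print(f"||W||_inf = {inf_norm:.4f} vs kappa/rho(A) = {bound:.4f} "
          f"-> constraint already satisfied: {inf_norm <= bound}")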
Thanks!