CapsGNN
CapsGNN (Loss=nan)?
I was trying to run this code but got this error. See the picture below.
In layers.py, add the line `b_ij = b_ij + u_vj1` before line 143 (`b_max = torch.max(b_ij, dim=2, keepdim=True)`).
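For context, here is a minimal sketch of where that statement sits inside one dynamic-routing step. Only the two quoted statements come from the repo; the function name, tensor shapes, and everything else below are assumptions, not the author's exact code.

```python
import torch

def routing_step(b_ij, u_vj1, u_hat):
    # Assumed shapes (not taken from the repo):
    #   b_ij:  routing logits,              (batch, n_in_caps, n_out_caps, 1)
    #   u_vj1: agreement from the last pass, same shape as b_ij
    #   u_hat: prediction vectors,          (batch, n_in_caps, n_out_caps, dim)
    b_ij = b_ij + u_vj1                               # the suggested fix: accumulate the agreement,
                                                      # otherwise b_ij never changes between iterations
    b_max = torch.max(b_ij, dim=2, keepdim=True)      # the existing line 143
    c_ij = torch.softmax(b_ij - b_max.values, dim=2)  # max-subtracted softmax for numerical stability
    s_j = (c_ij * u_hat).sum(dim=1, keepdim=True)     # weighted sum of predictions per output capsule
    return b_ij, s_j
```

Without that accumulation the coupling coefficients `c_ij` stay at their initial uniform values for every routing iteration.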
Hi,
Thank you for your help. Actually, I checked the author's commit history and had already added this line. It works on a small dataset (30 train / 30 test), but I still get the same problem on a larger dataset (1000 train / 1000 test) after several iterations, and the predictions are all 0.
Hi! I have run into the same problem. Have you found a solution?
Not yet. Still waiting for someone's help.
same problem, need help!
Graph-level classification: how can I add a batch size?
wow
Same problem: at epoch 20 the accuracy is only 0.33 and the loss is around 2.5.
Hey, let's dig through his code together. I think there may be a few problems in it, for example:
1. Before the attention module, the tensor view operation scrambles the data layout, and so does the view at hidden_representation.
2. The attention module is a bit different from the one in the paper.
3. In the squash operation, |mag| is used as a divisor without adding a small constant to guard against numerical problems.
4. In a standard capsule network, the capsules should not receive gradients during the first two iterations of dynamic routing; detach() should be used to cut them off, but that is not done here.
Want to add each other on QQ to discuss?
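To illustrate points 3 and 4 above, here is a small sketch (my own code, not taken from this repo) of a squash with an epsilon in the divisor and of detaching the predictions in all but the last routing iteration; the names and tensor shapes are assumptions.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    # Point 3: add a small eps under the square root so an all-zero capsule
    # does not cause a division by zero (and hence NaN losses).
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * (s / torch.sqrt(sq_norm + eps))

def dynamic_routing(u_hat, num_iterations=3):
    # u_hat: prediction vectors, assumed shape (batch, n_in, n_out, dim).
    b = torch.zeros(*u_hat.shape[:3], 1, device=u_hat.device)
    for it in range(num_iterations):
        c = torch.softmax(b, dim=2)                    # coupling coefficients
        if it < num_iterations - 1:
            # Point 4: route on detached predictions in the early iterations
            # so gradients only flow through the final pass.
            u = u_hat.detach()
            v = squash((c * u).sum(dim=1, keepdim=True))
            b = b + (u * v).sum(dim=-1, keepdim=True)  # update logits by agreement
        else:
            v = squash((c * u_hat).sum(dim=1, keepdim=True))
    return v.squeeze(1)                                # (batch, n_out, dim)
```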
Regarding the batch-size question: if the graph classification algorithm uses the DGL framework, it can group graphs into mini-batches to accelerate training. However, in my opinion, in the CapsGNN code above the author only uses the concept of a batch to compute the average loss over a batch; there is no actual batched computation.
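For what it's worth, a minimal sketch of what graph mini-batching looks like in DGL (none of this is CapsGNN code; the node-feature keys are placeholders): `dgl.batch` merges several small graphs into one block-diagonal graph, so a whole mini-batch goes through the model in a single forward pass, and a per-graph readout recovers one vector per input graph.

```python
import dgl
import torch

# `graphs` is a list of DGLGraph objects and `labels` their graph-level labels.
def make_minibatch(graphs, labels):
    bg = dgl.batch(graphs)        # one block-diagonal DGLGraph holding all graphs
    y = torch.tensor(labels)
    return bg, y

# After a GNN writes node embeddings to bg.ndata["h"], a readout such as
#   hg = dgl.mean_nodes(bg, "h")  # shape: (num_graphs, hidden_dim)
# collapses the batch back to one representation per graph.
# dgl.dataloading.GraphDataLoader performs this batching automatically.
```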
The dimension transformations in his code are really confusing, especially in the routing part. Is it really necessary to make it this complicated?
Someone on GitHub has reimplemented it; you can refer to shamnastv/GraphCaps.
Want to add each other on QQ and chat about it?