CapsGNN (Loss=nan)?

Open diesel248 opened this issue 4 years ago • 13 comments

I was trying to run this code but got this error. See the screenshot below:

[screenshot: training output showing Loss=nan]

diesel248 avatar May 19 '20 03:05 diesel248

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)
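For context, a minimal sketch of where this lands in layers.py (the surrounding routing code and the line-143 reference are assumed from the repository):

```python
# layers.py, dynamic-routing section (context assumed from the repo)
b_ij = b_ij + u_vj1                            # suggested fix: accumulate the routing logits
b_max = torch.max(b_ij, dim=2, keepdim=True)   # existing line 143
```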

lishu0716 avatar May 20 '20 10:05 lishu0716

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Hi,

Thank you for your help. Actually, I checked the author's commit history and have already added this line. It worked on the small dataset (30 train, 30 test), but I still got the same problem on the large dataset (1000 train, 1000 test) after several iterations, and the predictions are all 0.


diesel248 avatar May 20 '20 16:05 diesel248

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Hi,

Thank you for your help. Actually, I checked the author's commit history and have already added this line. It worked on the small dataset (30 train, 30 test), but I still got the same problem on the large dataset (1000 train, 1000 test) after several iterations, and the predictions are all 0.


Hi! I have met the same problem as you! Have you found a solution?

imSeaton avatar May 31 '20 11:05 imSeaton

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Hi, thank you for your help. Actually, I checked the author's commit history and have already added this line. It worked on the small dataset (30 train, 30 test), but I still got the same problem on the large dataset (1000 train, 1000 test) after several iterations, and the predictions are all 0.

Hi! I have met the same problem as you! Have you found a solution?

Not yet. Still waiting for someone's help.

diesel248 avatar Jun 02 '20 00:06 diesel248

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

same problem, need help!

holoword avatar Jun 25 '20 14:06 holoword

For graph-level classification, how can I add a batch size?

dtzfast avatar Jul 04 '20 09:07 dtzfast

wow

jack6756 avatar Oct 25 '20 00:10 jack6756

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Same problem: at epoch 20, accuracy is only 0.33 and the loss is around 2.5.

Wanghongyu97 avatar Nov 26 '20 06:11 Wanghongyu97

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Same problem: at epoch 20, accuracy is only 0.33 and the loss is around 2.5.

Brother, let's dig into his code together. I think there may be some problems in it, for example:

1. Before the attention module, the tensor view operation scrambles the data layout; the view at hidden_representation has the same issue.
2. The attention module is a bit different from the one in the paper.
3. In the squash operation, |mag| is used as a divisor without adding a small constant to guard against overflow.
4. In a standard capsule network, the capsules should carry no gradient in the first two iterations of dynamic routing and should be cut off with detach(); that is not done here.

Brother, want to add me on QQ so we can discuss?
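A minimal sketch of possible fixes for points 3 and 4, assuming u_hat holds the vote vectors with shape [batch, num_in_caps, num_out_caps, out_dim]; the names and shapes here are illustrative, not the repository's:

```python
import torch

def squash(s, eps=1e-8):
    # Squash non-linearity; eps in the divisor guards against division
    # by zero when ||s|| is near 0 (point 3 above).
    sq_norm = (s ** 2).sum(dim=-1, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iterations=3):
    # u_hat: assumed shape [batch, num_in_caps, num_out_caps, out_dim]
    b_ij = torch.zeros(*u_hat.shape[:3], 1, device=u_hat.device)
    for it in range(num_iterations):
        c_ij = torch.softmax(b_ij, dim=2)  # couplings over output capsules
        # Point 4: detach u_hat in all but the last iteration, so gradients
        # only flow through the final routing step.
        u = u_hat if it == num_iterations - 1 else u_hat.detach()
        v_j = squash((c_ij * u).sum(dim=1, keepdim=True))
        if it < num_iterations - 1:
            b_ij = b_ij + (u * v_j).sum(dim=-1, keepdim=True)
    return v_j.squeeze(1)  # [batch, num_out_caps, out_dim]
```

Detaching the early iterations is the common CapsNet trick of letting gradients flow only through the final routing pass, which tends to stabilize training.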

imSeaton avatar Nov 26 '20 08:11 imSeaton

Graph level classification, how to add batchsize?

If the graph classification algorithm uses the DGL framework, it can merge graphs into mini-batches to accelerate training. However, in my opinion, in the CapsGNN code above the author only uses the concept of a batch to compute the average loss over a batch, without any batched (parallel) computation.
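For illustration, a minimal sketch of DGL's mini-batching (assumes the dgl package is installed; this snippet is not from the CapsGNN repo):

```python
import dgl

# dgl.batch merges a list of graphs into one large disconnected graph,
# so message passing runs over the whole mini-batch at once.
graphs = [dgl.rand_graph(num_nodes=10, num_edges=20) for _ in range(4)]
batched = dgl.batch(graphs)

print(batched.batch_size)    # 4
print(batched.num_nodes())   # 40 (10 nodes per graph)
```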

imSeaton avatar Nov 27 '20 04:11 imSeaton

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Same problem: at epoch 20, accuracy is only 0.33 and the loss is around 2.5.

Brother, let's dig into his code together. I think there may be some problems in it, for example: 1. Before the attention module, the tensor view operation scrambles the data layout; the view at hidden_representation has the same issue. 2. The attention module is a bit different from the one in the paper. 3. In the squash operation, |mag| is used as a divisor without adding a small constant to guard against overflow. 4. In a standard capsule network, the capsules should carry no gradient in the first two iterations of dynamic routing and should be cut off with detach(); that is not done here. Brother, want to add me on QQ so we can discuss?

The dimension transformations in his code are really baffling, especially in the routing part. Is it really necessary to make it this complicated?
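On the view concern raised in point 1 earlier: torch.Tensor.view reinterprets the flat memory order rather than swapping axes, so using it where a permute is intended silently reassigns values across capsules. A tiny demonstration:

```python
import torch

x = torch.arange(6).reshape(2, 3)   # tensor([[0, 1, 2], [3, 4, 5]])

# view keeps memory order and just re-chunks it:
print(x.view(3, 2))                 # [[0, 1], [2, 3], [4, 5]]

# permute actually swaps the axes (a true transpose):
print(x.permute(1, 0))              # [[0, 3], [1, 4], [2, 5]]
```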

zhangxin9988 avatar Jul 21 '21 02:07 zhangxin9988

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Same problem: at epoch 20, accuracy is only 0.33 and the loss is around 2.5.

Brother, let's dig into his code together. I think there may be some problems in it, for example: 1. Before the attention module, the tensor view operation scrambles the data layout; the view at hidden_representation has the same issue. 2. The attention module is a bit different from the one in the paper. 3. In the squash operation, |mag| is used as a divisor without adding a small constant to guard against overflow. 4. In a standard capsule network, the capsules should carry no gradient in the first two iterations of dynamic routing and should be cut off with detach(); that is not done here. Brother, want to add me on QQ so we can discuss?

The dimension transformations in his code are really baffling, especially in the routing part. Is it really necessary to make it this complicated? Someone on GitHub has reimplemented it; you can refer to shamnastv/GraphCaps.

Wanghongyu97 avatar Jul 21 '21 02:07 Wanghongyu97

In layers.py, add a line b_ij = b_ij + u_vj1 before line 143 b_max = torch.max(b_ij, dim = 2, keepdim = True)

Same problem: at epoch 20, accuracy is only 0.33 and the loss is around 2.5.

Brother, let's dig into his code together. I think there may be some problems in it, for example: 1. Before the attention module, the tensor view operation scrambles the data layout; the view at hidden_representation has the same issue. 2. The attention module is a bit different from the one in the paper. 3. In the squash operation, |mag| is used as a divisor without adding a small constant to guard against overflow. 4. In a standard capsule network, the capsules should carry no gradient in the first two iterations of dynamic routing and should be cut off with detach(); that is not done here. Brother, want to add me on QQ so we can discuss?

Want to add each other on QQ to chat?

zezeze97 avatar Oct 28 '21 06:10 zezeze97