Matrix-Capsules-EM-PyTorch

A PyTorch Implementation of Matrix Capsules with EM Routing

Issues (7)

In the first line of ConvCaps's m_step member function, `r = r * a_in`: the `r` passed into this function has shape (b, B, C, 1), while `a_in` has shape (b, C, 1). These cannot broadcast, so the Hadamard product raises an error. Does `a_in` need an unsqueeze first?
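A minimal, hedged sketch of the broadcasting problem and the proposed `unsqueeze` fix, using the shapes stated above; the sizes and tensor contents are made up for illustration:

```python
import torch

b, B, C = 4, 8, 16                 # made-up sizes for illustration
r = torch.rand(b, B, C, 1)         # routing coefficients, shape as described above
a_in = torch.rand(b, C, 1)         # input activations, shape as described above

# r * a_in raises a RuntimeError: aligning shapes from the right puts
# B against b, which cannot broadcast in general.
r = r * a_in.unsqueeze(1)          # (b, 1, C, 1) broadcasts against (b, B, C, 1)
print(r.shape)                     # torch.Size([4, 8, 16, 1])
```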

I ran the tests of Hinton's code at https://github.com/google-research/google-research/tree/master/capsule_em and the results reach the level reported in the paper. But when I tested smallNORB with your code, the accuracy...

If this is the class_caps case, should this part be changed to the following code?

```python
if w_shared:
    hw = int(B / w.size(1))
    w = w.repeat(1, hw, 1, 1, 1)
else:
    w = w.repeat(b, 1, 1, 1, 1)
```

The original code...
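For context, a runnable illustration of the shape effect of the change proposed above, assuming shared weights of shape (1, B', C, P, P) that must be tiled over the spatial grid for the class capsules; all sizes and names are illustrative, not the repo's exact code:

```python
import torch

b, C, P = 2, 10, 4
B = 16 * 4 * 4                      # 16 capsule types over a 4x4 grid (illustrative)
w = torch.randn(1, 16, C, P, P)     # one transformation matrix per capsule type

w_shared = True
if w_shared:
    # ClassCaps: tile the 16 shared matrices over the spatial positions
    # so that w lines up with all B input capsules.
    hw = int(B / w.size(1))         # number of spatial positions
    w = w.repeat(1, hw, 1, 1, 1)    # -> (1, B, C, P, P)
else:
    # ConvCaps: repeat the weights over the batch dimension only.
    w = w.repeat(b, 1, 1, 1, 1)

print(w.shape)
```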

The paper reaches a much lower error rate (about 1.5%) on smallNORB, but it seems that no one here can reach that state-of-the-art level. I wonder why :)

I think there is an issue in the way the input tensor `x` is reshaped in order to extract `a_in` and `p_in`. It seems to me that the dimensions of...
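To make the question concrete, here is a hedged sketch of one common layout: each spatial position packs B capsules, each with a P×P pose matrix followed by one activation scalar. The exact layout used by the repo may differ, which is precisely what this issue questions; all names and sizes below are illustrative.

```python
import torch

b, h, w, B, P = 2, 6, 6, 8, 4
x = torch.randn(b, h, w, B * (P * P + 1))   # poses first, activations last (assumed layout)

p_in = x[..., : B * P * P].reshape(b, h, w, B, P * P)   # pose part: one P*P matrix per capsule
a_in = x[..., B * P * P :].reshape(b, h, w, B, 1)       # activation part: one scalar per capsule

print(p_in.shape, a_in.shape)               # (2, 6, 6, 8, 16) and (2, 6, 6, 8, 1)
```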

https://www.kaggle.com/code/tom99763/matrix-capsules-with-em-routing?scriptVersionId=124263074 I found that the model cannot be trained with a learning rate of 0.001 (MNIST accuracy is stuck at 0.1). I would like to know why the initial learning rate for this paper's model needs to be as large as 0.01. Is it a gradient problem? I also noticed that the gradients produced by the kernel_tile operation are very large; perhaps that is the issue?
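One hedged way to check whether exploding gradients (e.g. from the kernel_tile path) are what stalls training at lr = 0.001 is to log per-parameter gradient norms after backward() and optionally clip them. The toy model below only stands in for the capsule network and training loop in the linked notebook:

```python
import torch
import torch.nn as nn

# Placeholder model and batch; replace with the capsule network and real data.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

# Print per-parameter gradient norms to see which layers dominate the update.
for name, p in model.named_parameters():
    if p.grad is not None:
        print(f"{name:20s} grad-norm {p.grad.norm().item():.3e}")

# Clipping bounds the total gradient norm before the optimizer step, so a
# small learning rate such as 1e-3 is not stalled by a few huge components.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
```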