RGM
matmul issue while performing caliters_perm
```
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_3678/3876408280.py in <module>

/tmp/ipykernel_3678/871858585.py in caliters_perm(model, P1_gt_copy, P2_gt_copy, A1_gt, A2_gt, n1_gt, n2_gt, estimate_iters)
    228     for estimate_iter in range(estimate_iters):
    229         s_prem_i, Inlier_src_pre, Inlier_ref_pre = model(P1_gt_copy, P2_gt_copy,
--> 230                                                          A1_gt, A2_gt, n1_gt, n2_gt)
    231         if cfg.PGM.USEINLIERRATE:
    232             s_prem_i = Inlier_src_pre * s_prem_i * Inlier_ref_pre.transpose(2, 1).contiguous()

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/tmp/ipykernel_3678/871858585.py in forward(self, P_src, P_tgt, A_src, A_tgt, ns_src, ns_tgt)
    101                 emb_src, emb_tgt = gnn_layer([A_src1, emb_src], [A_tgt1, emb_tgt])
    102             else:
--> 103                 emb_src, emb_tgt = gnn_layer([A_src, emb_src], [A_tgt, emb_tgt])
    104             affinity = getattr(self, 'affinity_{}'.format(i))
    105             # emb_src_norm = torch.norm(emb_src, p=2, dim=2, keepdim=True).detach()

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/home/ubuntu/PointCloudRegistration/AIModels/RGM/models/gconv.py in forward(self, g1, g2)
     34 
     35     def forward(self, g1, g2):
---> 36         emb1 = self.gconv(*g1)
     37         emb2 = self.gconv(*g2)
     38         # embx are tensors of size (bs, N, num_features)

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/home/ubuntu/PointCloudRegistration/AIModels/RGM/models/gconv.py in forward(self, A, x, norm)
     19         A = F.normalize(A, p=1, dim=-2)
     20         print(x.shape)
---> 21         ax = self.a_fc(x)
     22         ux = self.u_fc(x)
     23         x = torch.bmm(A, F.relu(ax)) + F.relu(ux)  # has size (bs, N, num_outputs)

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
     92 
     93     def forward(self, input: Tensor) -> Tensor:
---> 94         return F.linear(input, self.weight, self.bias)
     95 
     96     def extra_repr(self) -> str:

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
   1751     if has_torch_function_variadic(input, weight):
   1752         return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753     return torch._C._nn.linear(input, weight, bias)
   1754 
   1755 

RuntimeError: mat1 and mat2 shapes cannot be multiplied (76x1024 and 640x1024)
```
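For context on the error message itself: `mat1` is the (flattened) input to an `nn.Linear` and `mat2` comes from its weight, so the message suggests the tensor reaching the layer carries 1024 features while the layer was built for 640. A minimal, standalone sketch (the sizes are taken from the message above, not from RGM code):

```python
import torch
import torch.nn as nn

# A layer built for 640-dimensional inputs (in_features=640, out_features=1024).
fc = nn.Linear(640, 1024)

# Feeding it 1024-dimensional features reproduces the same kind of failure:
x = torch.randn(76, 1024)
try:
    fc(x)
except RuntimeError as e:
    print(e)  # e.g. "mat1 and mat2 shapes cannot be multiplied (76x1024 and 640x1024)"
```

In other words, the feature dimension produced upstream does not match the `in_features` of the linear layer inside the Gconv.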
I have met the same problem. Have you solved it yet?
Not yet.
Changing FEATURE_NODE_CHANNEL and FEATURE_EDGE_CHANNEL may help. The error should be caused by `nn.Linear(in_features, out_features)`: the previous layer's out_features needs to equal the next layer's in_features.
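As a hedged illustration of that constraint (hypothetical sizes, not the RGM defaults): stacked `nn.Linear` layers only compose when each layer's `out_features` equals the next layer's `in_features`.

```python
import torch
import torch.nn as nn

x = torch.randn(4, 76, 640)  # (batch, points, feature channels) -- made-up sizes

# Consistent chain: out_features of each layer == in_features of the next.
ok = nn.Sequential(nn.Linear(640, 1024), nn.ReLU(), nn.Linear(1024, 512))
print(ok(x).shape)  # torch.Size([4, 76, 512])

# Inconsistent chain: the second layer expects 640 features but receives 1024,
# which fails with the same "shapes cannot be multiplied" RuntimeError.
broken = nn.Sequential(nn.Linear(640, 1024), nn.ReLU(), nn.Linear(640, 512))
try:
    broken(x)
except RuntimeError as e:
    print(e)
```

In RGM these sizes are driven by the channel settings mentioned above, so they need to be consistent with the feature dimension the model actually produces upstream.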
If you haven't modified the source code, you can see that `self.a_fc` at line 21 of gconv.py is an `nn.Linear` with input channels = 1024 and output channels = 512.
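One way to check this without tracing the config by hand is to list every `nn.Linear` in the instantiated model and compare its `in_features` with the feature dimension that actually reaches it (1024 in the traceback above). A small diagnostic sketch; `model` stands for your RGM network instance:

```python
import torch.nn as nn

def show_linear_shapes(model: nn.Module) -> None:
    """Print the expected input/output sizes of every nn.Linear in the model."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            print(f"{name}: in_features={module.in_features}, "
                  f"out_features={module.out_features}")

# Usage:
#   show_linear_shapes(model)
# If the first Gconv's a_fc/u_fc report an in_features different from the
# shape printed at line 20 of gconv.py, adjust FEATURE_NODE_CHANNEL /
# FEATURE_EDGE_CHANNEL (as suggested above) until the two agree.
```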