Converting the model to TorchScript for inference in C++
Hi @erikwijmans,
Have you tried converting the pointnet2 model you wrote to TorchScript for inference in C++?
I've run into some issues. One of them concerns the attribute 'npoint' in the _PointnetSAModuleBase class.
```python
class _PointnetSAModuleBase(torch.jit.ScriptModule):
    # __constants__ = ['npoint']

    def __init__(self):
        super(_PointnetSAModuleBase, self).__init__()
        # self.register_buffer('npoint', None)
        self.npoint = None
        self.groupers = None
        self.mlps = None

    @torch.jit.script_method
    def forward(self, xyz, features=None):
        ...
```
It throws a runtime error:
```
RuntimeError:
attribute 'npoint' of type 'int' is not usable in a script method (did you forget to add it __constants__?):
...
            pointnet2_utils.gather_operation(
                xyz_flipped, pointnet2_utils.furthest_point_sample(xyz, self.npoint)
            )
            .transpose(1, 2)
            .contiguous()
            if self.npoint is not None
               ~~~~~~~~~~~ <--- HERE
            else None
        )
...
```
I tried putting __constants__ = ['npoint'] on the line before def __init__(self):, but that doesn't work. npoint needs to stay a variable: its default value is None, and it may eventually be assigned an int.
Do you have any idea how to fix this? Thanks in advance!
I haven't tried TorchScript with this repo, and I have no idea how the C++ extensions interact with TorchScript. For npoint, you can try annotating it as Optional[int] to see if that makes TorchScript happy.
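For what it's worth, a minimal sketch of that idea with the old ScriptModule API, using torch.jit.Attribute to declare the Optional[int]. The class name and forward body below are illustrative, not the repo's actual code:

```python
from typing import Optional

import torch


class SAModuleSketch(torch.jit.ScriptModule):
    def __init__(self, npoint=None):
        super(SAModuleSketch, self).__init__()
        # Declare npoint as Optional[int] so TorchScript accepts both the
        # None default and a later int value.
        self.npoint = torch.jit.Attribute(npoint, Optional[int])

    @torch.jit.script_method
    def forward(self, xyz):
        # Copy to a local so TorchScript can refine Optional[int] -> int
        # inside the None check.
        npoint = self.npoint
        if npoint is not None:
            return xyz[:, :npoint, :]
        return xyz
```

Whether the custom C++ ops such as furthest_point_sample can be scripted at all is a separate question.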
@jb892 Did you manage to convert to TorchScript successfully? I'm running into similar issues in this process...
@doublexxking Nope, I failed to do so. Instead, I rewrote the whole network manually in caffe2 and loaded the weights from a trained PyTorch model. That works perfectly for me. I hope it helps!
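For anyone trying the same route, the weight-transfer step might look roughly like the sketch below. It assumes the caffe2 net was built with blob names that mirror the PyTorch parameter names; the checkpoint path is made up.

```python
import torch
from caffe2.python import workspace

# Load the trained PyTorch checkpoint (the path here is an assumption).
state_dict = torch.load("pointnet2_trained.pth", map_location="cpu")

# Copy each parameter into the caffe2 workspace. This only works if the
# caffe2 network uses blob names matching the PyTorch parameter names.
for name, tensor in state_dict.items():
    workspace.FeedBlob(name, tensor.cpu().numpy())
```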