torch2trt
Fails to convert PyTorch Mask R-CNN Resnet50 FPN model
I'm trying to convert this model:

```python
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
```
This is the conversion code:

```python
import cv2
import torch
import torchvision.transforms as T
from torch2trt import torch2trt

model = torch.load('/path/to/model/trained_model.pt')
model.eval()
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model = model.to(device)

# Dummy input
img = cv2.imread('/path/to/images/something.jpg')
transform = T.Compose([T.ToTensor()])
img = transform(img)
img = img.to(device).unsqueeze_(0)

model = torch2trt(model, [img])
```
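The failure does not seem to depend on the image pipeline: as the traceback below shows, it happens inside the model's built-in `GeneralizedRCNNTransform`, so even a random tensor should hit the same path. A minimal repro sketch (the input shape is arbitrary and CUDA is assumed):

```python
import torch
import torchvision
from torch2trt import torch2trt

# No image loading involved; the crash is in the model's own preprocessing.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 800, 800).cuda()
model_trt = torch2trt(model, [x])  # raises the same AttributeError: 'Tensor' object has no attribute '_trt'
```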
These are the warnings I get:

```
Warning: Encountered known unsupported method torch.Tensor.unbind
Warning: Encountered known unsupported method torch.Tensor.__iter__
Warning: Encountered known unsupported method torch.Tensor.unbind
Warning: Encountered known unsupported method torch.Tensor.__iter__
Warning: Encountered known unsupported method torch.Tensor.is_floating_point
Warning: Encountered known unsupported method torch.as_tensor
Warning: Encountered known unsupported method torch.as_tensor
```
Then I get this output:
```
AttributeError                            Traceback (most recent call last)
<ipython-input-2-9e433d9c0209> in <module>
      7 img = transform(img)
      8 img = img.to(device, non_blocking=True).unsqueeze_(0)
----> 9 model = torch2trt(model, [img])
     10
     11 # Set to evaluation mode

/opt/conda/lib/python3.7/site-packages/torch2trt-0.2.0-py3.7-linux-x86_64.egg/torch2trt/torch2trt.py in torch2trt(module, inputs, input_names, output_names, log_level, max_batch_size, fp16_mode, max_workspace_size, strict_type_constraints, keep_network, int8_mode, int8_calib_dataset, int8_calib_algorithm, int8_calib_batch_size, use_onnx, **kwargs)
    540     ctx.add_inputs(inputs, input_names)
    541
--> 542     outputs = module(*inputs)
    543
    544     if not isinstance(outputs, tuple) and not isinstance(outputs, list):

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/opt/conda/lib/python3.7/site-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)
     76             original_image_sizes.append((val[0], val[1]))
     77
---> 78         images, targets = self.transform(images, targets)
     79
     80         # Check for degenerate boxes

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/opt/conda/lib/python3.7/site-packages/torchvision/models/detection/transform.py in forward(self, images, targets)
    101                 raise ValueError("images is expected to be a list of 3d tensors "
    102                                  "of shape [C, H, W], got {}".format(image.shape))
--> 103             image = self.normalize(image)
    104             image, target_index = self.resize(image, target_index)
    105             images[i] = image

/opt/conda/lib/python3.7/site-packages/torchvision/models/detection/transform.py in normalize(self, image)
    126         mean = torch.as_tensor(self.image_mean, dtype=dtype, device=device)
    127         std = torch.as_tensor(self.image_std, dtype=dtype, device=device)
--> 128         return (image - mean[:, None, None]) / std[:, None, None]
    129
    130     def torch_choice(self, k):

/opt/conda/lib/python3.7/site-packages/torch2trt-0.2.0-py3.7-linux-x86_64.egg/torch2trt/torch2trt.py in wrapper(*args, **kwargs)
    287
    288         # print('%s' % (converter.__name__,))
--> 289         converter["converter"](ctx)
    290
    291         # convert to None so conversion will fail for unsupported layers

/opt/conda/lib/python3.7/site-packages/torch2trt-0.2.0-py3.7-linux-x86_64.egg/torch2trt/converters/getitem.py in convert_tensor_getitem(ctx)
     28     output = ctx.method_return
     29
---> 30     input_trt = input._trt
     31
     32     # Step 1 - Replace ellipsis with expanded slices

AttributeError: 'Tensor' object has no attribute '_trt'
```
I don't understand why I get this error from `torch2trt/converters/getitem.py`, nor what I have to do to make it work. If I understand correctly, the converter for the `getitem` function already exists, right?
Is there a quick fix to solve this problem?
Are there plans to support the detection models from torchvision? There are already tests for torchvision's classification and segmentation models.
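In the meantime, one workaround that seems to work is to skip the unsupported pre/post-processing and convert only a tensor-in/tensor-out submodule, such as the backbone. A sketch (the `BackboneWrapper` class, input shape, and `fp16_mode` choice are my own assumptions, not an official torch2trt recipe):

```python
import torch
import torchvision
from torch2trt import torch2trt

class BackboneWrapper(torch.nn.Module):
    """Return a tuple of tensors instead of an OrderedDict,
    since torch2trt expects tensor (or tuple/list of tensor) outputs."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):
        features = self.backbone(x)      # OrderedDict of FPN feature maps
        return tuple(features.values())

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval().cuda()
backbone = BackboneWrapper(model.backbone).eval().cuda()

x = torch.randn(1, 3, 800, 800).cuda()   # already-normalized, already-resized input
backbone_trt = torch2trt(backbone, [x], fp16_mode=True)
```

The resize/normalize transform and the RPN/ROI heads then stay in plain PyTorch, with only the backbone accelerated by TensorRT.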
@leaf918 Did you find a fix for this? I see the same on Keypoint R-CNN here: #761