HigherHRNet-Human-Pose-Estimation

libtorch or tensorrt

Open zhanghongruiupup opened this issue 5 years ago • 12 comments

Hi, thanks for your code. Can this model be converted to Libtorch or TensorRT? Hope for your reply!

zhanghongruiupup avatar Dec 05 '19 09:12 zhanghongruiupup

Hi, thanks for your code. Can this model be converted to Libtorch or TensorRT? Hope for your reply!

zhanghongruiupup avatar Dec 05 '19 10:12 zhanghongruiupup

I tried to convert the model to a .pt (TorchScript) file with torch.jit.trace, but it failed:

Number of layers: Conv2d: 302, BatchNorm2d: 301, ReLU: 270, Bottleneck: 4, BasicBlock: 108, Upsample: 28, HighResolutionModule: 8, ConvTranspose2d: 1
=> loading model from output\coco_kpt\pose_higher_hrnet\w32_512_adam_lr1e-3\model_best.pth.tar
loading annotations into memory...
=> classes: ['background', 'ball']
Done (t=0.02s)
creating index...
index created!
Traceback (most recent call last):
  File "D:/gc/Higher-HRNet-Human-Pose-Estimation/tools/valid.py", line 231, in <module>
    main()
  File "D:/gc/Higher-HRNet-Human-Pose-Estimation/tools/valid.py", line 154, in main
    traced_script_module = torch.jit.trace(model, example)
  File "D:\Users\admin\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 858, in trace
    check_tolerance, _force_outplace, _module_class)
  File "D:\Users\admin\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 991, in trace_module
    module = make_module(mod, _module_class, compilation_unit, tuple(inputs.keys()))
  File "D:\Users\admin\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 709, in make_module
    return _module_class(mod, compilation_unit=compilation_unit)
  File "D:\Users\admin\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 1462, in init_then_register
    original_init(self, *args, **kwargs)
  File "D:\Users\admin\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 1733, in __init__
    self._modules[name] = make_module(submodule, TracedModule, compilation_unit)
  [... the make_module / init_then_register / __init__ frames repeat as tracing recurses into nested submodules ...]
  File "D:\Users\admin\Anaconda3\lib\site-packages\torch\jit\__init__.py", line 1710, in __init__
    assert(isinstance(orig, torch.nn.Module))
AssertionError

Process finished with exit code 1
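That AssertionError is raised by torch.jit's TracedModule when it meets a child in _modules that is not an nn.Module; HRNet-style models keep None placeholders in their fuse-layer ModuleLists, which older torch.jit versions cannot wrap. Below is a minimal tracing sketch, not the repo's exact valid.py: the 1x3x512x512 input size is an assumption taken from the w32_512 config name, `model` stands for the already-built and loaded network, and the None-replacement workaround is a guess that may still leave other tracing issues.

```python
# Hypothetical tracing sketch (not the repo's valid.py); `model` is assumed to be the
# built-and-loaded PoseHigherResolutionNet, and the input size is assumed from w32_512.
import torch
import torch.nn as nn

# Older torch.jit versions assert when a child stored in _modules is not an nn.Module,
# so None placeholders (e.g. inside the fuse-layer ModuleLists, which the forward pass
# never calls) may need to be swapped for a harmless module before tracing.
for module in model.modules():
    for name, child in module._modules.items():
        if child is None:
            module._modules[name] = nn.Identity()

model.eval()
example = torch.randn(1, 3, 512, 512)  # assumed input size from the w32_512 config name
with torch.no_grad():
    traced = torch.jit.trace(model, example)
traced.save("higher_hrnet_w32_512.pt")
```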

121649982 avatar Jan 04 '20 01:01 121649982

I tried to convert the HRNet model for human pose estimation (https://github.com/leoxiaobin/deep-high-resolution-net.pytorch) to TensorRT and succeeded, however the outputs of the unconverted and converted models were significantly different.

increase24 avatar Feb 01 '20 17:02 increase24

I tried to convert the HRNet model for human pose estimation (https://github.com/leoxiaobin/deep-high-resolution-net.pytorch) to TensorRT and succeeded, however the outputs of the unconverted and converted models were significantly different.

May I know exactly how you converted it to TensorRT?

H19012 avatar Mar 11 '20 05:03 H19012

TensorRT doesn't support Upsample.

121649982 avatar Mar 26 '20 03:03 121649982

I tried to convert the HRNet model for human pose estimation (https://github.com/leoxiaobin/deep-high-resolution-net.pytorch) to TensorRT and succeeded, however the outputs of the unconverted and converted models were significantly different.

Have you managed to solve it?

121649982 avatar Mar 26 '20 03:03 121649982

TensorRT doesn't support Upsample.

It does; what does not support the Upsample layer is the ONNX-TRT parser. However, if you use a Torch-to-TRT converter (such as torch2trt) you can get the Upsample layer to work in TensorRT.
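For reference, this is roughly what the torch2trt route looks like. It is a minimal sketch, assuming torch2trt is installed, `model` is the already-built and loaded HigherHRNet network, and a 1x3x512x512 input (assumed from the w32_512 config name); if the engine builds, comparing the outputs element-wise is a reasonable sanity check since the model returns a list of heatmap tensors.

```python
# Minimal torch2trt sketch; `model` is assumed to be the built-and-loaded HigherHRNet.
import torch
from torch2trt import torch2trt

model = model.cuda().eval()
x = torch.randn(1, 3, 512, 512).cuda()  # assumed input size from the w32_512 config name

# build a TensorRT engine by running the model once with the example input
model_trt = torch2trt(model, [x])

# sanity check: compare original vs converted outputs
with torch.no_grad():
    y = model(x)
    y_trt = model_trt(x)
for a, b in zip(y, y_trt):  # the model returns a list of heatmap tensors
    print(float(torch.max(torch.abs(a - b))))
```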

JVGD avatar Mar 30 '20 09:03 JVGD

TensorRT doesn't support Upsample.

It does; what does not support the Upsample layer is the ONNX-TRT parser. However, if you use a Torch-to-TRT converter (such as torch2trt) you can get the Upsample layer to work in TensorRT.

When I try to convert the PyTorch model to a TRT model, I get this error:

Done (t=0.00s)
creating index...
index created!
Warning: Encountered known unsupported method torch.Tensor.get_device
/home/liu/.conda/envs/python37/lib/python3.7/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
Warning: Encountered known unsupported method torch.nn.functional.interpolate
[... the same interpolate warning repeats for every upsampling call in the network ...]
[TensorRT] ERROR: (Unnamed Layer* 1036) [Concatenation]: all concat input tensors must have the same dimensions except on the concatenation axis (0), but dimensions mismatched at input 1 at index 1. Input 0 shape: [1,32,128,160], Input 1 shape: [1,12,128,160]
Traceback (most recent call last):

121649982 avatar Mar 30 '20 10:03 121649982

Could you tell me how to convert it? Thanks.

121649982 avatar Mar 30 '20 11:03 121649982

@121649982, in your case the problem seems to lie here:

Warning: Encountered known unsupported method torch.Tensor.get_device
/home/liu/.conda/envs/python37/lib/python3.7/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.

Then in here:

[TensorRT] ERROR: (Unnamed Layer* 1036) [Concatenation]: all concat input tensors must have the same dimensions except on the concatenation axis (0), but dimensions mismatched at input 1 at index 1. Input 0 shape: [1,32,128,160], Input 1 shape: [1,12,128,160]

It seems you are trying to concatenate tensor1 [1,32,128,160] and tensor2 [1,12,128,160] on dimension 0 (the first one), while it looks like you want to concatenate on dimension 1, since for the concatenation operation all dimensions must match except the concatenation dimension.
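To make the constraint concrete, here is the same shape pair from the error message in plain PyTorch:

```python
import torch

a = torch.randn(1, 32, 128, 160)
b = torch.randn(1, 12, 128, 160)

print(torch.cat([a, b], dim=1).shape)  # works: only dim 1 differs -> torch.Size([1, 44, 128, 160])
# torch.cat([a, b], dim=0)             # fails: dim 1 differs (32 vs 12) on a non-concat axis
```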

But looking at the problem in perspective, I think that, under the hood, this is a problem related to the ONNX-TRT parser, because of your first warning. You are trying to export the model to ONNX before exporting it to TRT, and the Upsample layer is not yet supported by the ONNX-TRT parser.

I am not familiar with this project; I just came here because I was researching TRT and libtorch and thought about contributing since I have faced the same problem. For more info check this. Good luck!

Try using the torch2trt package.
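For the interpolate warnings specifically, torch2trt lets you register your own converter for an unsupported call. The sketch below is a hypothetical nearest-neighbour converter, not the one shipped with torch2trt (newer releases include their own interpolate support); it assumes TensorRT >= 6 (IResizeLayer), implicit batch mode, nearest-mode upsampling as used by HRNet's fuse layers, and that these helper names match your torch2trt version, so check before relying on it.

```python
# Hypothetical torch2trt converter for F.interpolate (nearest mode, implicit batch mode).
# Names and fields are assumptions to verify against your torch2trt/TensorRT versions.
import tensorrt as trt
from torch2trt import tensorrt_converter

@tensorrt_converter('torch.nn.functional.interpolate')
def convert_interpolate(ctx):
    x = ctx.method_args[0]                  # PyTorch input tensor of the interpolate call
    output = ctx.method_return              # PyTorch output tensor (gives the target shape)
    layer = ctx.network.add_resize(x._trt)  # assumes x._trt was attached by an upstream converter
    layer.shape = tuple(output.shape[1:])   # implicit batch mode: drop the batch dimension
    layer.resize_mode = trt.ResizeMode.NEAREST
    output._trt = layer.get_output(0)
```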

JVGD avatar Mar 30 '20 11:03 JVGD

Could you tell me how to convert it? Thanks.

You can modify the cat.py file in the converters folder of torch2trt. I was able to convert it successfully, but the output is very different.
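The exact change is not shown here, but the usual cause of the axis-0 concat error above is the converter not picking up the dim argument of torch.cat and not shifting it for TensorRT's implicit-batch axis numbering. Below is a generic sketch of such a converter; it is hypothetical, not necessarily the modification described in the comment above, and the helper names should be checked against your torch2trt version.

```python
# Hypothetical torch.cat converter; torch2trt's real converters/cat.py differs by version.
from torch2trt import tensorrt_converter, get_arg

@tensorrt_converter('torch.cat')
def convert_cat(ctx):
    tensors = ctx.method_args[0]
    dim = get_arg(ctx, 'dim', pos=1, default=0)  # dim may be passed positionally or as keyword
    output = ctx.method_return
    layer = ctx.network.add_concatenation([t._trt for t in tensors])
    layer.axis = dim - 1                         # implicit batch: torch dim=1 -> TensorRT axis 0
    output._trt = layer.get_output(0)
```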

avnhungnh avatar Jun 04 '20 04:06 avnhungnh

I tried to convert the HRNet model for human pose estimation (https://github.com/leoxiaobin/deep-high-resolution-net.pytorch) to TensorRT and succeeded, however the outputs of the unconverted and converted models were significantly different.

Can you share your code? Thanks.

chh7411898 avatar Sep 10 '21 09:09 chh7411898