
tensorRT

yanlongbinluck opened this issue Mar 25 '20 · 11 comments

Hi, can this onnx model be converted to .trt model ?

yanlongbinluck avatar Mar 25 '20 13:03 yanlongbinluck

Hello @yanlongbinluck were you successful in figuring this out?

ishang3 avatar Apr 13 '20 17:04 ishang3

MobileNetV1 could be converted to TRT via torch2trt without the postprocessing part (decoding boxes, etc.); MobileNetV2 could be converted to TRT via onnx-tensorrt, also without postprocessing.
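Since the exported engine stops before postprocessing, the box decoding has to be reimplemented on the host. Below is a minimal NumPy sketch of the standard SSD decoding of regression offsets against center-form priors, assuming pytorch-ssd's default variances (0.1 for centers, 0.2 for sizes); the function name is illustrative, not from the repo:

```python
import numpy as np

def decode_boxes(locations, priors, center_variance=0.1, size_variance=0.2):
    """Decode raw SSD regression outputs into corner-form boxes.

    locations: (N, 4) network outputs (dx, dy, dw, dh)
    priors:    (N, 4) anchors in center form (cx, cy, w, h), normalized to [0, 1]
    Variances assumed to match pytorch-ssd's defaults.
    """
    centers = locations[:, :2] * center_variance * priors[:, 2:] + priors[:, :2]
    sizes = np.exp(locations[:, 2:] * size_variance) * priors[:, 2:]
    # center form (cx, cy, w, h) -> corner form (x1, y1, x2, y2)
    return np.concatenate([centers - sizes / 2, centers + sizes / 2], axis=1)

# Sanity check: zero offsets should reproduce the prior itself.
priors = np.array([[0.5, 0.5, 0.2, 0.2]])
print(decode_boxes(np.zeros((1, 4)), priors))  # → [[0.4 0.4 0.6 0.6]]
```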

mstrfx avatar Apr 20 '20 08:04 mstrfx

Thank you @mstrfx

ishangupta3 avatar May 01 '20 23:05 ishangupta3

> MobileNetV1 could be converted to TRT via torch2trt without the postprocessing part (decoding boxes, etc.); MobileNetV2 could be converted to TRT via onnx-tensorrt, also without postprocessing.

Hi @mstrfx, trying to convert SSD MobileNetV2 to TensorRT throws this error:

[screenshot of the TensorRT conversion error]

I am using TensorRT 5.1; could you please tell me which version you tried?

tmralmeida avatar May 27 '20 15:05 tmralmeida

> I am using TensorRT 5.1; could you please tell me which version you tried?

Hey @tmralmeida, I used the latest ones, 7.0 and 7.1. The default ONNX TRT parser sometimes has issues with Slice and Concat operators, so I recommend using onnx-tensorrt to build the inference engine (you can compile it without installing the Python libs and convert the ONNX model to an engine).

Or you could try cutting your model in PyTorch, dumping it to ONNX, and converting again, just to see which layer fails.

mstrfx avatar May 27 '20 15:05 mstrfx

Thank you for your answer @mstrfx! I want to deploy the model on a Jetson AGX Xavier, which only has TensorRT 5.1; I'm trying to do that through the jetson-inference library. Updating TensorRT on a Jetson device is cumbersome because it has to be flashed. What bothers me about your advice is this: even if I could obtain the engine by doing what you propose, the engine file won't be recognized on the Jetson, because it was created with a more recent version of TRT. I hope I've been clear enough, and thank you for your answer!

tmralmeida avatar May 27 '20 19:05 tmralmeida

Were you successful in converting mobilenet v2 into an onnx file?

ishang3 avatar Jun 08 '20 18:06 ishang3

@mstrfx After converting to an .onnx file, do you know of any resources that would help parse the output? I understand the output will be in this format: output_names=['scores', 'boxes']

But, I am not exactly sure how to filter them out.
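One common way to filter the raw `scores`/`boxes` outputs is a confidence threshold followed by per-class non-maximum suppression. A NumPy sketch under those assumptions (the function names and thresholds are illustrative, not part of this repo's API; `scores` is assumed to be per-prior class probabilities and `boxes` corner-form coordinates):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2)."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        # intersection of the top box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], rest[:, 0])
        y1 = np.maximum(boxes[i, 1], rest[:, 1])
        x2 = np.minimum(boxes[i, 2], rest[:, 2])
        y2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_threshold]  # drop heavily overlapping boxes
    return keep

def filter_detections(scores, boxes, class_id, prob_threshold=0.4):
    """scores: (num_priors, num_classes); boxes: (num_priors, 4)."""
    probs = scores[:, class_id]
    mask = probs > prob_threshold
    kept = nms(boxes[mask], probs[mask])
    return boxes[mask][kept], probs[mask][kept]
```

In pytorch-ssd the class-probability matrix includes a background class, so you would typically loop `filter_detections` over class indices starting at 1.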

ishang3 avatar Jun 11 '20 00:06 ishang3

I have made a mobilenetv1-ssd-tensorrt project based on this project. https://github.com/tjuskyzhang/mobilenetv1-ssd-tensorrt

tjuskyzhang avatar Nov 10 '20 00:11 tjuskyzhang

My issue had already been resolved by that date. Anyway, thanks @tjuskyzhang!

tmralmeida avatar Nov 10 '20 08:11 tmralmeida

> I have made a mobilenetv1-ssd-tensorrt project based on this project. https://github.com/tjuskyzhang/mobilenetv1-ssd-tensorrt

@tjuskyzhang May I ask: I trained with mb1-ssd from qfgaohao/pytorch-ssd, and converting the resulting PyTorch model to .wts succeeds, but when generating the engine file some layers are reported as unsupported. I'm not sure whether I failed to freeze some layers while training the model, or whether there is another cause.

xm0629 avatar Jan 10 '21 13:01 xm0629