pytorch-ssd
TensorRT
Hi, can this ONNX model be converted to a .trt model?
Hello @yanlongbinluck were you successful in figuring this out?
MobileNetV1 can be converted to TRT via torch2trt without the postprocessing part (box decoding etc.); MobileNetV2 can be converted to TRT via onnx-tensorrt, also without postprocessing.
Thank you @mstrfx
Hi @mstrfx, trying to convert SSD MobileNetV2 to TensorRT throws this error:
I am using TensorRT 5.1; could you please tell me which version you tried?
Hey @tmralmeida, I used the latest ones, 7.0 and 7.1. The default ONNX TRT parser sometimes has issues with the slice and concat operators, so I recommend using onnx-tensorrt (you can compile it without installing the Python libs and convert the ONNX model to an engine) to get an inference engine.
Or you could try cutting your model in PyTorch, dumping it to ONNX, and converting again, just to see which layer fails.
Thank you for your answer @mstrfx! I want to deploy the model on a Jetson AGX Xavier, which only has TensorRT 5.1, and I'm trying to do that through the jetson-inference library. It is cumbersome to update the TensorRT version on a Jetson device because it has to be reflashed. What bothers me about your advice is that even if I could obtain the engine by doing what you propose, the engine file wouldn't be recognized on the Jetson device because it was created with a more recent version of TRT. I hope I've been clear enough, and thank you for your answer!
Were you successful in converting mobilenet v2 into an onnx file?
@mstrfx After converting to an .onnx file, do you know of any resources to help parse the output? I understand the output will be in this format: output_names=['scores', 'boxes']
But I am not exactly sure how to filter them.
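One common way to filter raw ('scores', 'boxes') SSD outputs (not specific to this repo) is a per-box confidence threshold followed by greedy non-maximum suppression. A minimal pure-Python sketch, assuming `scores` holds per-box class probabilities with index 0 as background, `boxes` holds [x1, y1, x2, y2] corners, and the thresholds and example data are made up for illustration:

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_detections(scores, boxes, conf_thresh=0.5, iou_thresh=0.45):
    # scores: per-box class-probability lists (index 0 = background).
    # boxes:  per-box [x1, y1, x2, y2] corner coordinates.
    # Returns (class_id, score, box) tuples after thresholding + greedy NMS.
    cands = []
    for box, probs in zip(boxes, scores):
        cls = max(range(1, len(probs)), key=lambda c: probs[c])  # skip background
        if probs[cls] >= conf_thresh:
            cands.append((cls, probs[cls], box))
    cands.sort(key=lambda d: d[1], reverse=True)  # highest confidence first
    keep = []
    for det in cands:
        # Keep a box only if it doesn't heavily overlap a kept box of the same class.
        if all(det[0] != k[0] or iou(det[2], k[2]) < iou_thresh for k in keep):
            keep.append(det)
    return keep

# Hypothetical example: three boxes, two classes (background + one object class).
scores = [[0.1, 0.9], [0.2, 0.8], [0.95, 0.05]]
boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
print(filter_detections(scores, boxes))  # -> [(1, 0.9, [0, 0, 10, 10])]
```

The second box is suppressed by NMS (it overlaps the first, higher-scoring box of the same class), and the third falls below the confidence threshold.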
I have made a mobilenetv1-ssd-tensorrt project based on this project. https://github.com/tjuskyzhang/mobilenetv1-ssd-tensorrt
That one had no problems at the time. Anyway, thanks @tjuskyzhang
@tjuskyzhang May I ask: why is it that when I train with mb1-ssd, the resulting PyTorch model converts to .wts successfully, but when generating the engine file some layers are reported as unsupported? I don't know whether it's because I didn't freeze some layers when training the model, or some other reason.