
Export model to tensorrt

Open MHerique opened this issue 3 years ago • 6 comments

Hi!

I would like to know if you have already tried converting the RAPiD PyTorch model into a TensorRT model with success?

I'm trying to do it, but it looks a bit difficult. I went through an ONNX model first, but going from ONNX to TensorRT is a bit difficult.

thanks

MHerique avatar Jun 29 '21 14:06 MHerique

Hello, thank you for your interest.

Unfortunately, we did not try it, and we are not planning to do it. We are moving on to a new project and probably will not develop RAPiD for exporting to ONNX or TensorRT. Sorry for the inconvenience!

That said, if there are more people expecting this feature, I will take some time to learn and implement the TensorRT export.

duanzhiihao avatar Jun 29 '21 15:06 duanzhiihao

I posted an issue earlier and came across this one. I plan to extend the model to allow ONNX and TensorRT support but I am waiting on a non-prohibitive license to be put into the repo as I'm hesitant to make any modifications otherwise.

stevebottos avatar Jul 01 '21 22:07 stevebottos

Hi, I added a script that converts our model to ONNX. You can choose the pre-trained weights and input shape according to your preference: https://github.com/duanzhiihao/RAPiD/blob/master/export_onnx.py

However, I didn't check whether the ONNX model's output matches that of the original PyTorch model.
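For anyone who wants to verify the export themselves, a minimal sketch of such a check is below. The comparison helper is generic; the commented onnxruntime/PyTorch usage is illustrative only (the session and model names are assumptions, not from the repo):

```python
import numpy as np

def compare_outputs(pt_out, onnx_out, rtol=1e-3, atol=1e-4):
    """Return True if PyTorch and ONNX raw predictions agree within tolerance.

    A tiny mismatch is expected from floating-point op reordering;
    a large one usually points to a preprocessing or export bug.
    """
    if pt_out.shape != onnx_out.shape:
        return False
    return bool(np.allclose(pt_out, onnx_out, rtol=rtol, atol=atol))

# Illustrative usage (names hypothetical):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("rapid.onnx")
#   onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]
#   pt_out = model(torch.from_numpy(x)).detach().numpy()
#   assert compare_outputs(pt_out, onnx_out)
```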

I also managed to convert the ONNX model to TensorRT without errors by following the TensorRT docs. That is, simply run trtexec --onnx=rapid.onnx --saveEngine=rapid.trt --explicitBatch after we have the ONNX model, which I named rapid.onnx here.

Let me know if there are any problems.

duanzhiihao avatar Jul 10 '21 12:07 duanzhiihao

@duanzhiihao thanks for sharing the file, but when I run inference with the ONNX model I am not getting any outputs at all. I have shared the image below; any thoughts on this?

abhigoku10 avatar Oct 12 '21 04:10 abhigoku10

It may be a bug in the ONNX export, since I didn't test the ONNX model.

It seems that the predictions' scale differs from the image's, as the predictions are concentrated in the top-left corner. Did you resize the image before feeding it into the model?

Also, it would be helpful if you could provide the code that produced this image.
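A common cause of predictions piling up in the top-left corner is a mismatch between the network's fixed input size and the original image size. A minimal sketch of aspect-ratio-preserving resizing plus mapping predictions back, assuming a square input (1024 is used here for illustration) and cx,cy,w,h box predictions; function names are illustrative, not from the repo:

```python
import numpy as np

def letterbox_resize(img, dst_size=1024):
    """Resize an HxWx3 image to dst_size x dst_size, preserving aspect ratio
    by zero-padding. Returns the padded image plus the scale and pad offsets
    needed to map predictions back to original coordinates."""
    h, w = img.shape[:2]
    scale = dst_size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor resize in pure numpy (no cv2 dependency)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    pad_top = (dst_size - nh) // 2
    pad_left = (dst_size - nw) // 2
    canvas = np.zeros((dst_size, dst_size, 3), dtype=img.dtype)
    canvas[pad_top:pad_top + nh, pad_left:pad_left + nw] = resized
    return canvas, scale, pad_left, pad_top

def unscale_boxes(boxes, scale, pad_left, pad_top):
    """Map (N, 4) cx,cy,w,h predictions from network space back to the
    original image's coordinate system."""
    boxes = boxes.copy().astype(float)
    boxes[:, 0] = (boxes[:, 0] - pad_left) / scale
    boxes[:, 1] = (boxes[:, 1] - pad_top) / scale
    boxes[:, 2:4] = boxes[:, 2:4] / scale
    return boxes
```

If the image is fed in at its native resolution while the exported graph was traced with a fixed input shape, the detections end up in the wrong scale, which would match the top-left-corner symptom described above.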

duanzhiihao avatar Oct 12 '21 15:10 duanzhiihao

Regarding the bug in ONNX model inference, I am getting the same results as @abhigoku10. Are there any updates or clues on how to resolve this issue?

I've just used the same input, without resizing at all; maybe it's something related to the rapid_export.py module?

Thanks

deepsrm avatar Sep 29 '23 15:09 deepsrm