
TensorRT inference demo

linghu8812 opened this issue 4 years ago • 5 comments

Hello everyone, here is a TensorRT inference demo for nanodet: https://github.com/linghu8812/tensorrt_inference/tree/master/project/nanodet.

First of all, when I export the ONNX model, I add softmax and concat layers to it, so the end of the ONNX model looks like this: [image]. This increases the inference time of the model, but it reduces the postprocessing time; taken together, the total processing time is reduced, so I chose this way to export the ONNX model.
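As a rough sketch of this approach (the wrapper below is illustrative, not the repo's actual export code; the head's output interface is an assumption), the idea is to wrap the model so the softmax and the cross-level concat happen inside forward(), and then export that wrapper:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ExportWrapper(nn.Module):
    """Illustrative wrapper: moves softmax + concat into the exported ONNX graph."""
    def __init__(self, model):
        super().__init__()
        self.model = model  # trained NanoDet model

    def forward(self, x):
        # Assumed interface: the model returns per-level (cls_logits, reg_preds)
        # pairs; the real head differs, this only shows where the new nodes go.
        fused = []
        for cls_logits, reg_preds in self.model(x):
            scores = F.softmax(cls_logits, dim=-1)                 # becomes a Softmax node
            fused.append(torch.cat([scores, reg_preds], dim=-1))   # per-level Concat
        return torch.cat(fused, dim=1)                             # final Concat across levels

# torch.onnx.export(ExportWrapper(model), dummy_input, output_path, opset_version=11)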

In addition, the onnxsim module is used during ONNX export, so the exported model has already been simplified.

import onnx
from onnxsim import simplify

onnx_model = onnx.load(output_path)       # load the exported ONNX model
model_simp, check = simplify(onnx_model)  # simplify the graph with onnxsim
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, output_path)        # overwrite with the simplified model
print('finished exporting onnx')
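If it helps, the simplified model can be sanity-checked with onnxruntime before building the TensorRT engine (the input-name lookup is standard onnxruntime usage; the 1x3x320x320 input size for nanodet_m is an assumption on my part):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(output_path)                  # the simplified ONNX file
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 320, 320).astype(np.float32)    # assumed nanodet_m input size
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])                            # expect one fused output tensor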

Finally, the TensorRT inference result is shown below: [image]

For more information, please refer to: https://github.com/linghu8812/tensorrt_inference

linghu8812 avatar Dec 08 '20 15:12 linghu8812

How do I export the nanodet ONNX model with softmax and concat? I used nanodet_m.ckpt and export-onnx.py from https://github.com/linghu8812/tensorrt_inference, but the ONNX model still looks like this: [image]

yueyihua avatar Apr 17 '21 06:04 yueyihua

@yueyihua Use https://github.com/linghu8812/nanodet to export the model.

imneonizer avatar Jul 09 '21 05:07 imneonizer

@linghu8812 why is the output 1×2100×84? How do you get 2100 and 84?

tomjeans avatar Dec 01 '21 10:12 tomjeans
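A guess at the arithmetic behind that shape, assuming the usual NanoDet-m settings (320×320 input, strides 8/16/32, 80 COCO classes plus 4 decoded box values); this is not verified against the linked export:

strides = [8, 16, 32]
input_size = 320                                            # assumed nanodet_m input resolution
num_points = sum((input_size // s) ** 2 for s in strides)   # 1600 + 400 + 100 = 2100
num_classes = 80                                            # COCO classes
box_values = 4                                              # assumed: 4 box distances after decoding
print(num_points, num_classes + box_values)                 # 2100 84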

It doesn't work either, even when using export_onnx.py as you mentioned, @imneonizer.

ysyyork avatar Dec 07 '21 12:12 ysyyork

NVM, I figured out I have to run python setup.py install again with that forked repo.

ysyyork avatar Dec 07 '21 12:12 ysyyork