TensorRT inference demo
Hello everyone, here is a TensorRT inference demo for NanoDet: https://github.com/linghu8812/tensorrt_inference/tree/master/project/nanodet.
First of all, when exporting the ONNX model I add a softmax and a concat layer to the graph, so the end of the ONNX model looks like this:
Adding these layers increases the model's inference time slightly, but it reduces the post-processing time; on balance, the total processing time goes down, so I chose this way to export the ONNX model.
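To make this concrete, here is a minimal sketch of how such an export wrapper could look. It is not the author's exact code: the wrapper name, the raw head output shapes ([N, 2100, 80] class logits and [N, 2100, 4, 8] box-distribution logits with reg_max = 7), and the decoding step are assumptions for illustration.

import torch
import torch.nn as nn

class ExportWrapper(nn.Module):
    # hypothetical wrapper that bakes softmax and concat into the ONNX graph
    def __init__(self, model, reg_max=7):
        super().__init__()
        self.model = model  # the trained NanoDet model (assumed to return raw head outputs)
        # fixed projection vector that turns each (reg_max + 1)-bin distribution into a scalar
        self.register_buffer("project", torch.arange(reg_max + 1, dtype=torch.float32))

    def forward(self, x):
        cls_logits, reg_logits = self.model(x)      # assumed shapes: [N, 2100, 80], [N, 2100, 4, 8]
        scores = cls_logits.sigmoid()               # class scores, [N, 2100, 80]
        dist = reg_logits.softmax(dim=-1)           # the softmax now lives inside the graph
        boxes = (dist * self.project).sum(dim=-1)   # expected offsets, [N, 2100, 4], still in stride units
        return torch.cat([scores, boxes], dim=-1)   # single concatenated output, [N, 2100, 84]

wrapper = ExportWrapper(model).eval()
torch.onnx.export(wrapper, torch.zeros(1, 3, 320, 320), output_path,
                  opset_version=11, input_names=["input"], output_names=["output"])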
In addition, the onnxsim module is applied during export, so the exported model is already simplified:
import onnx
from onnxsim import simplify

onnx_model = onnx.load(output_path)  # load the exported ONNX model
model_simp, check = simplify(onnx_model)  # fold constants and strip redundant ops
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, output_path)  # overwrite with the simplified graph
print('finished exporting onnx')
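As an optional sanity check (my addition, not part of the original post), you can run the simplified model once with onnxruntime and confirm the output shape; output_path is the same path used above, and the expected (1, 2100, 84) shape assumes the 320x320 NanoDet-m export described here.

import numpy as np
import onnxruntime as ort

# create a CPU session on the simplified model
sess = ort.InferenceSession(output_path, providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# dynamic dimensions come back as strings, so substitute 1 for them
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
out = sess.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})[0]
print(out.shape)  # expected: (1, 2100, 84) for this export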
Finally, the TensorRT inference result is shown below:
For more information, please refer to https://github.com/linghu8812/tensorrt_inference.
How can I export the NanoDet ONNX model with softmax and concat? I used nanodet_m.ckpt and export-onnx.py from https://github.com/linghu8812/tensorrt_inference, but the ONNX model still looks like this:
@yueyihua Use https://github.com/linghu8812/nanodet to export the model.
@linghu8812 why is the output 1×2100×84? How do you get 2100 and 84?
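For reference, a quick back-of-the-envelope check (my note, assuming the standard 320x320 NanoDet-m input, strides 8/16/32, and 80 COCO classes):

strides = [8, 16, 32]
priors = sum((320 // s) ** 2 for s in strides)  # 40*40 + 20*20 + 10*10
print(priors)   # 2100 prior points across the three feature maps
print(80 + 4)   # 84 = 80 class scores + 4 decoded box values per prior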
Doesn't work either, even using the export_onnx.py you mentioned @imneonizer
NVM, I figured out I have to run python setup.py install again with that forked repo.