
Export insightface models to ONNX and inference with TensorRT

linghu8812 opened this issue 4 years ago · 5 comments

Hello everyone, here are some scripts that can convert insightface params to ONNX models. These scripts collect the various methods of exporting MXNet or insightface params found on GitHub and CSDN, and can export the various insightface models: RetinaFace, arcface, 2d106det and gender-age are all supported. The repo address is: https://github.com/linghu8812/tensorrt_inference

| supported models | scripts |
| --- | --- |
| RetinaFace | export_retinaface |
| arcface | export_arcface |
| 2d106det | export_2d106det |
| gender-age | export_gender-age |

Export RetinaFace params to ONNX

For the RetinaFace model, RetinaFace-R50, RetinaFace-MobileNet0.25 and RetinaFaceAntiCov are all supported. Copy project/RetinaFace/export_onnx.py to ./detection/RetinaFace or ./detection/RetinaFaceAntiCov; the following commands export those models.

  • export resnet50 model

```shell
python3 export_onnx.py
```

  • export mobilenet 0.25 model

```shell
python3 export_onnx.py --prefix ./model/mnet.25
```

  • export RetinaFaceAntiCov model

```shell
python3 export_onnx.py --prefix ./model/mnet_cov2 --network net3l
```
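Once exported, the model expects its input as an NCHW float32 tensor rather than the HWC uint8 image most loaders return. A minimal preprocessing sketch is below; the 640×640 input size and RGB channel order are assumptions of mine, so check the exported model's actual input shape with a viewer such as Netron before relying on them:

```python
import numpy as np

def preprocess(img_bgr, size=(640, 640)):
    """Prepare an HWC uint8 BGR image for an exported detection ONNX model.

    Assumes the model takes NCHW float32 RGB input of the given size; both
    are assumptions to verify against the actual exported graph.
    """
    h, w = size
    # Nearest-neighbour resize, kept dependency-free for the sketch;
    # use cv2.resize in real code.
    ys = np.linspace(0, img_bgr.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, img_bgr.shape[1] - 1, w).astype(int)
    resized = img_bgr[ys][:, xs]
    rgb = resized[..., ::-1].astype(np.float32)      # BGR -> RGB
    return np.transpose(rgb, (2, 0, 1))[None]        # HWC -> NCHW + batch dim
```

The resulting array can be fed directly to an onnxruntime session or copied into a TensorRT input binding.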

Inference code with TensorRT has also been supplied; the following are inference results:

  • RetinaFace-R50 result

(image: RetinaFace-R50 detection result)

  • RetinaFaceAntiCov result

(image: RetinaFaceAntiCov detection result)

Export arcface params to ONNX

For the arcface model, a reshape of the PRelu slope params has been added before export, so the exported PRelu node structure is shown as follows:

(image: exported PRelu node structure)
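The reshape is needed because ONNX PRelu broadcasts its slope tensor against an NCHW input: a per-channel slope stored as shape `(C,)` would line up with the width axis instead of the channel axis, while `(C, 1, 1)` broadcasts per channel over H and W. A small NumPy sketch of the broadcasting rule (the `prelu` helper is mine, mimicking the ONNX operator's definition):

```python
import numpy as np

def prelu(x, slope):
    # ONNX PRelu semantics: y = x where x >= 0, else slope * x,
    # with slope broadcast against x.
    return np.where(x >= 0, x, slope * x)

x = np.random.randn(1, 4, 8, 8).astype(np.float32)   # NCHW input
gamma = np.full(4, 0.25, dtype=np.float32)           # MXNet stores slope as (C,)

# (4,) would try to broadcast against W=8 and fail; (4, 1, 1) aligns with C.
y = prelu(x, gamma.reshape(4, 1, 1))
assert y.shape == x.shape
```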

Export gender-age and 2d106det params to ONNX

The following is a TensorRT result for the 2d106det model; for now it runs alone, not chained with RetinaFace:

(image: 2d106det landmark result)

linghu8812 avatar Dec 04 '20 16:12 linghu8812

Hi! Great work @linghu8812! I've got a somewhat overlapping project, though one more focused on deploying a TensorRT face recognition pipeline over a REST API. I hope it won't be too rude to share it here instead of spamming issues: InsightFace-REST

SthPhoenix avatar Dec 08 '20 22:12 SthPhoenix

Hi, I have successfully converted the model to ONNX and was able to load it. But when I tried to use it for inference, it threw a runtime error:

```
Traceback (most recent call last):
  File "predict_generate_detections_for_map.py", line 181, in
    detect_face(img, op_filename)
  File "predict_generate_detections_for_map.py", line 144, in detect_face
    confidences, boxes = ort_session.run(None, {input_name: img})
  File "/home/gpumachine/shamila/env/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 124, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Resize node. Name:'ssh_c3_up' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/upsample.cc:1036 onnxruntime::common::Status onnxruntime::Upsample<T>::Compute(onnxruntime::OpKernelContext*) const [with T = float] sizes != nullptr && sizes->Shape().Size() != 0 was false. Either scales or sizes MUST be provided as input.
```

jijo20 avatar Feb 03 '21 12:02 jijo20

https://github.com/linghu8812/tensorrt_inference/issues/12#issuecomment-768060742

this might help
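For anyone hitting the same error: since opset 11 the ONNX Resize operator takes inputs `[X, roi, scales, sizes]`, and onnxruntime rejects the node when both `scales` and `sizes` are left empty, which is exactly what the message above says. One way to locate such nodes is to walk `model.graph.node` with the `onnx` package and inspect each Resize's input list; the check itself is plain bookkeeping (the function name and example inputs are mine, not from the repo):

```python
def resize_needs_patch(node_inputs):
    """Return True if an ONNX Resize node (opset >= 11) has neither a
    'scales' nor a 'sizes' input wired up.

    Optional ONNX inputs that are omitted appear as empty strings in the
    node's input list, so an input list like ["x", "roi", ""] is broken.
    """
    scales = node_inputs[2] if len(node_inputs) > 2 else ""
    sizes = node_inputs[3] if len(node_inputs) > 3 else ""
    return scales == "" and sizes == ""

# Hypothetical usage with the onnx package:
#   import onnx
#   model = onnx.load("mnet.25.onnx")
#   bad = [n.name for n in model.graph.node
#          if n.op_type == "Resize" and resize_needs_patch(list(n.input))]
```

If such nodes exist, re-exporting with a matching opset (or attaching a `scales` initializer to each Resize, as discussed in the linked issue) should clear the error.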

linghu8812 avatar Feb 04 '21 00:02 linghu8812

Where can I download mnet_cov2? Thank you

caeruleum00 avatar Jun 01 '21 08:06 caeruleum00

@linghu8812 I have an arcface r100 which is identical to the insightface r100 arcface model, with the exception that the output feature vector is 500 elements instead of 512. When I use your conversion script, the onnx model always produces the same output regardless of input. Do I need to adapt it somehow to make it work?
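A quick way to confirm the "same output for every input" symptom is to embed two different random inputs and compare the results with cosine similarity: a healthy recognition model should not return near-identical vectors for unrelated noise. The helper below is plain NumPy; `sess` and `inp` in the usage comment are hypothetical onnxruntime names for your own session and input:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors, flattened to 1-D.
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage against an onnxruntime session:
#   e1 = sess.run(None, {inp: np.random.randn(1, 3, 112, 112).astype(np.float32)})[0]
#   e2 = sess.run(None, {inp: np.random.randn(1, 3, 112, 112).astype(np.float32)})[0]
#   cosine_similarity(e1, e2)
# A similarity of ~1.0 for every input pair suggests the exported graph is
# effectively frozen (e.g. weights not loaded or an input disconnected).
```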

cyrusbehr avatar Apr 06 '23 21:04 cyrusbehr