
RetinaFace, INT8 calibration, TensorRT 8.6.1 error: [pluginV2Runner.cpp::execute::265] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)

ohadjerci opened this issue 3 months ago · 7 comments

Env

  • Docker: nvcr.io/nvidia/tensorrt:24.01-py3
  • GPU: GeForce RTX 2060
  • OS: Ubuntu 18.04
  • CUDA: 12.0
  • TensorRT: 8.6.1.6-1

About this repo

Repo: wang-xinyu/tensorrtx, model: retinaface

Hello,

The FP16 engine works, but with lower performance and some warnings on TensorRT 8.6.1.6:

  • [W] [TRT] - 100 weights are affected by this issue: Detected subnormal FP16 values.
  • [W] [TRT] - 73 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
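
For reference, FP16 is enabled in my builder roughly like this (a minimal sketch using the standard TensorRT 8.x C++ API; the exact code in the repo's retinaface builder may differ):

```cpp
#include "NvInfer.h"

// Enable FP16 if the GPU supports it; converting the weights to half precision
// is what triggers the subnormal-value warnings above.
void enableFp16(nvinfer1::IBuilder* builder, nvinfer1::IBuilderConfig* config) {
    if (builder->platformHasFastFp16()) {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
    }
}
```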

On the other hand, INT8 calibration with TensorRT 8.6.1.6 leads to errors:

  • [E] [TRT] 2: Assertion scales.size() == 1 failed.
  • [E] [TRT] 2: [pluginV2Runner.cpp::getInputHostScale::88] Error Code 2: Internal Error (Assertion scales.size() == 1 failed. )
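
This is roughly how I set up the calibrator (a sketch: `calibrator` is my own class implementing nvinfer1::IInt8EntropyCalibrator2, following the calibrator pattern used by the other models in this repo; names and arguments are illustrative):

```cpp
#include "NvInfer.h"

// Switch the builder config to INT8 and attach the entropy calibrator.
// The getInputHostScale / scales.size() assertion is raised while building
// the engine with this configuration.
void enableInt8(nvinfer1::IBuilder* builder, nvinfer1::IBuilderConfig* config,
                nvinfer1::IInt8Calibrator* calibrator) {
    if (builder->platformHasFastInt8()) {
        config->setFlag(nvinfer1::BuilderFlag::kINT8);
        config->setInt8Calibrator(calibrator);
    }
}
```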

First, I tried to generate an engine from a trained model (half precision). Second, I reorganized the code in the style of yolov9/yolov7, but without success.

I'm hoping someone can tell me more about this error message, or point me to documentation that explains it. Does it mean that the build failed while processing a plugin, or is it about the scale of the interpolation?
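
If the problem is the Decode plugin not having an INT8 scale, one thing I plan to try (just a guess, not verified) is pinning the plugin layer to FP32 while keeping the rest of the network in INT8, along these lines:

```cpp
#include "NvInfer.h"

// Force every IPluginV2 layer (the retinaface Decode plugin in this network)
// to run in FP32, so the INT8 builder does not need an input/output scale for it.
void keepPluginsInFp32(nvinfer1::INetworkDefinition* network,
                       nvinfer1::IBuilderConfig* config) {
    for (int i = 0; i < network->getNbLayers(); ++i) {
        nvinfer1::ILayer* layer = network->getLayer(i);
        if (layer->getType() == nvinfer1::LayerType::kPLUGIN_V2) {
            layer->setPrecision(nvinfer1::DataType::kFLOAT);
            for (int j = 0; j < layer->getNbOutputs(); ++j) {
                layer->setOutputType(j, nvinfer1::DataType::kFLOAT);
            }
        }
    }
    // Ask TensorRT to respect the per-layer precision set above.
    config->setFlag(nvinfer1::BuilderFlag::kOBEY_PRECISION_CONSTRAINTS);
}
```

Would that be the right direction, or is the assertion unrelated to the plugin?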

Any suggestion is highly appreciated. Thanks in advance.

ohadjerci · Mar 14 '24 21:03