onnx-tensorrt
Model inference results differ between TensorRT and ONNXRuntime
I have a model, available here: https://drive.google.com/file/d/1ND10l2SFG35VBHiOcKFcBI_iWw0cQXNL/view?usp=sharing
The model simply takes an image as input (no preprocessing needed, only an HWC -> CHW conversion and an unsqueeze) and outputs dets, labels, and masks, i.e. an instance segmentation result.
Strangely, the model gives correct results when run with ONNXRuntime, but with TensorRT (both the Python and C++ APIs) the scores are wrong.
The score is the last element of each row of dets; the first 4 elements are the box.
With TensorRT, all scores come out below roughly 0.2, even for very clear objects that should score around 0.8 or 0.9. On the same image, ONNXRuntime produces normal scores, 0.8+ for clear objects, while TensorRT gives wrong results.
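For reference, this is roughly how the ONNXRuntime side is run (a minimal sketch; the image path, model path, and float32 input dtype are assumptions, and the input name is read from the session):

```python
# Minimal repro sketch for the ONNXRuntime side.
# Assumptions: model file is "model.onnx", test image is "test.jpg",
# and the model expects a float32 NCHW tensor (check with Netron if unsure).
import cv2
import numpy as np
import onnxruntime as ort

img = cv2.imread("test.jpg")                             # HWC, BGR, uint8
blob = img.transpose(2, 0, 1)[None].astype(np.float32)   # HWC -> CHW, unsqueeze to NCHW

sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
dets, labels, masks = sess.run(None, {input_name: blob})

# dets rows are [x1, y1, x2, y2, score]; with ONNXRuntime the max score is 0.8+,
# with TensorRT it stays below ~0.2 for the same image.
print("max score:", dets[..., 4].max())
```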
I don't know which op causes this, but it is clearly a TensorRT issue rather than a problem with the ONNX model itself, since ONNXRuntime gives the right result.
PTAL.
After checking your model with Polygraphy, the outputs of Conv_161 and Conv_157 look problematic in TRT. I will update here if I have further findings.
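For anyone who wants to reproduce the check, a sketch along the lines of Polygraphy's framework-comparison example (the model path is a placeholder, and the API names are as I recall them from the Polygraphy examples):

```python
# Sketch of comparing ONNXRuntime vs. TensorRT outputs with Polygraphy's Python API.
# Based on Polygraphy's "comparing frameworks" example; "model.onnx" is a placeholder.
from polygraphy.backend.onnxrt import OnnxrtRunner, SessionFromOnnx
from polygraphy.backend.trt import EngineFromNetwork, NetworkFromOnnxPath, TrtRunner
from polygraphy.comparator import Comparator

MODEL = "model.onnx"

runners = [
    OnnxrtRunner(SessionFromOnnx(MODEL)),
    TrtRunner(EngineFromNetwork(NetworkFromOnnxPath(MODEL))),
]

# Runs both backends on the same generated input and compares all outputs.
results = Comparator.run(runners)
assert bool(Comparator.compare_accuracy(results))
```

If I remember the flags correctly, the CLI form `polygraphy run model.onnx --trt --onnxrt --trt-outputs mark all --onnx-outputs mark all` additionally compares per-layer outputs, which is how intermediate layers such as Conv_161 and Conv_157 can be singled out.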