tensorrt_inference
INT8 problem
Can we run an ONNX model in INT8 mode? I defined a calibrator when I used the layer API to build the model, and it works well. But when I use the same calibrator on the ONNX model, it fails to produce correct results. Is calibration handled the same way as when building the model layer by layer? (yolov5)
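For reference, this is roughly how the calibrator is usually attached when building from ONNX (a sketch against the TensorRT Python API; `MyCalibrator` and the file paths are placeholders, and the calibrator itself must match the ONNX model's input name, shape, and preprocessing, which is a common source of wrong INT8 results):

```python
import tensorrt as trt

# Placeholder: your IInt8EntropyCalibrator2 subclass that feeds
# preprocessed batches shaped exactly like the ONNX network input.
from my_calibrator import MyCalibrator

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# ONNX parsing requires an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("yolov5.onnx", "rb") as f:  # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
# Attach the same calibrator used with the layer API; note that with
# explicit batch the calibration batches must already include the
# batch dimension and use the same normalization as inference.
config.int8_calibrator = MyCalibrator("calib_images/", "calib.cache")

engine = builder.build_engine(network, config)
```

One difference from the layer-API path: a layer-by-layer network is often built in implicit-batch mode, while the ONNX parser requires explicit batch, so a calibrator that returned batches without the leading batch dimension (or with a different input tensor name) can silently calibrate on garbage and produce wrong outputs.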