TensorRT-Image-Classification
About Int8
Did you set nvinfer1::DataType to Int8? I set nvinfer1::DataType to Int8, but the engine file was not created. I think calibration data should be used (as with the TensorRT detector) when nvinfer1::DataType is set to Int8.
@guods Hi, is there any error message, or is the engine simply not created?
Also, what is your purpose in changing nvinfer1::DataType to Int8? To calibrate the data?
What happens if you set builder->setInt8Mode(true); ?
Running in kINT8 mode gives the error message "nvinfer1::Network::addScale::163, condition:scale.type == shift.type && shift.type == power.type, error parsing layer type Scale index 3.". Running in kFLOAT mode, the program works correctly.
In INT8 mode, classification is much faster. But to preserve recognition accuracy when the parameters go from FP32 to INT8, the engine must be built with calibration data. The NVIDIA documentation has corresponding instructions.