tflite-support
Quantization with tflite: Unexpected input data type. Actual: (tensor(float)), expected: (tensor(int8))
I am quantizing my yolov5 model to tflite using the following code:
```python
import tensorflow as tf

saved_model_dir = 'path2_saved_model'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_quant_model = converter.convert()
```
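Note that `representative_dataset` is referenced but not shown above. A minimal sketch of such a generator, assuming 640x640 RGB inputs (the YOLOv5 default) and placeholder arrays standing in for real calibration images:

```python
import numpy as np

def representative_dataset():
    # Yield ~100 calibration samples; in practice these should be real,
    # preprocessed training/validation images rather than random data.
    for _ in range(100):
        image = np.random.rand(1, 640, 640, 3).astype(np.float32)
        yield [image]
```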
However, when I try to run detection, the model expects the input image to be int8. How can I solve this issue? Thank you in advance!
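Because `inference_input_type` is set to `tf.int8`, the exported model expects the caller to quantize the float image before invoking it, using the scale and zero point stored in the model. A minimal sketch of that step with the TFLite Python interpreter, assuming a hypothetical `quant_model.tflite` file, 640x640 input, and a preprocessed float32 image in [0, 1]:

```python
import numpy as np
import tensorflow as tf

# Hypothetical file name; use the path where the quantized model was saved.
interpreter = tf.lite.Interpreter(model_path="quant_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Preprocessed float32 image, NHWC, values in [0, 1]; 640x640 assumed here.
image = np.random.rand(1, 640, 640, 3).astype(np.float32)

# Quantize the float input with the scale/zero-point stored in the model.
scale, zero_point = input_details["quantization"]
image_int8 = np.clip(np.round(image / scale + zero_point), -128, 127).astype(np.int8)

interpreter.set_tensor(input_details["index"], image_int8)
interpreter.invoke()

# Dequantize the int8 output back to float for post-processing (NMS, etc.).
out_scale, out_zero_point = output_details["quantization"]
output = (interpreter.get_tensor(output_details["index"]).astype(np.float32)
          - out_zero_point) * out_scale
```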