tensorflow-yolov4-tflite
Duplicated quantization flag assignment makes it impossible to create a fully int8-quantized model
https://github.com/hunglc007/tensorflow-yolov4-tflite/blob/9f16748aa3f45ff240608da4bd9b1216a29127f5/convert_tflite.py#L39-L41
The second assignment to `converter.target_spec.supported_ops` supersedes the first one, which contains the int8 flag required for full integer quantization. As a result, the int8 setting never takes effect and the converter falls back to a non-quantized (or only partially quantized) model.
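The overwrite is ordinary Python attribute assignment, so the second list simply replaces the first. A minimal stand-in (a hypothetical `TargetSpec` class with plain strings instead of the real `tf.lite.OpsSet` enum values) shows why the int8 flag is lost and how a single merged or guarded assignment would preserve it:

```python
class TargetSpec:
    """Hypothetical stand-in for converter.target_spec, for illustration only."""
    def __init__(self):
        self.supported_ops = []

spec = TargetSpec()

# Buggy pattern: the second assignment replaces the whole list,
# silently discarding the int8 flag set just before it.
spec.supported_ops = ["TFLITE_BUILTINS_INT8"]              # int8 flag set here...
spec.supported_ops = ["TFLITE_BUILTINS", "SELECT_TF_OPS"]  # ...and lost here
assert "TFLITE_BUILTINS_INT8" not in spec.supported_ops

# One possible fix: assign the ops list once (or make the second
# assignment conditional on the chosen quantization mode) so the
# int8 flag survives to conversion time.
spec.supported_ops = ["TFLITE_BUILTINS_INT8", "SELECT_TF_OPS"]
assert "TFLITE_BUILTINS_INT8" in spec.supported_ops
```

In the real script the fix would be to gate the second `supported_ops` assignment behind the non-int8 quantization branches, rather than executing it unconditionally after the int8 branch.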