tensorflow-yolov4-tflite
yolov4 quantize float16
Is it important to quantize YOLOv4 to either float16 or int8? What is the difference between them? Also, when I run the command `python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-fp16.tflite --quantize_mode float16` I get this error: `OSError: SavedModel file does not exist at: ./checkpoints/yolov4-416/{saved_model.pbtxt|saved_model.pb}`
- You don't necessarily have to quantize the model; quantization mainly shrinks the model and speeds up inference. To understand the difference between float16 and int8, see the TensorFlow post-training quantization docs.
- The "SavedModel file does not exist" error means the path you passed doesn't contain a `saved_model.pb`. Generate it first with `python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4 --framework tflite`, then rerun the convert command.
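As background on the float16 option, here is a minimal sketch of what float16 post-training quantization does with the standard `tf.lite.TFLiteConverter` API. It uses a tiny stand-in Keras model rather than the YOLOv4 SavedModel from `save_model.py`, so the model itself is an assumption for illustration; the converter settings are the documented TensorFlow Lite ones.

```python
import tensorflow as tf

# Tiny stand-in model; in this repo you would instead convert the
# SavedModel that save_model.py writes to ./checkpoints/yolov4-416.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(256,)),
    tf.keras.layers.Dense(256),
])

# Baseline: plain float32 conversion.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_fp32 = converter.convert()

# Float16 post-training quantization: weights are stored as fp16,
# roughly halving model size; no calibration data is needed.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16 = converter.convert()

# Full int8 quantization would additionally require a representative
# dataset for calibration (converter.representative_dataset = ...),
# which is why float16 is the simpler of the two modes.
```

Float16 trades a small accuracy loss for about a 2x size reduction; int8 gives roughly 4x and faster CPU/EdgeTPU inference but needs calibration data.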