TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi

Does it make sense for the input tensor type to differ from the output tensor type?

Open Petros626 opened this issue 2 years ago • 0 comments

Hey,

I reviewed your guide for training a TF2 object detection model and wondered why you're using two different datatypes for the final inference.

First you stick with int8, and then you declare uint8 for the input_tensor. The second odd thing is the final use of float32. I assume the model gets fed with uint8 (shouldn't it be int8?) for faster inference, while the output_tensor should stay as accurate as possible? Doesn't that create a bottleneck?

Logically, wouldn't it make sense to use int8 for both the input_tensor and the output_tensor?

# For full integer quantization, the supported types default to int8 only, but we declare it explicitly for clarity.
converter.target_spec.supported_types = [tf.int8]
# These set the input tensors to uint8 and output tensors to float32
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.float32
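
For reference, this is roughly the full conversion script I have in mind, just to show where those three lines sit. The saved model path, input shape, and representative dataset are placeholders I made up, not values from the guide:

import tensorflow as tf
import numpy as np

# Placeholder path to an exported TF2 object detection SavedModel.
saved_model_dir = "exported_model/saved_model"

def representative_dataset():
    # A few sample inputs so the converter can calibrate the int8 ranges.
    # Random data is only a stand-in; real calibration images should be used.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Weights/activations are quantized to int8 internally...
converter.target_spec.supported_types = [tf.int8]
# ...but the model exposes uint8 at the input and float32 at the output,
# which is exactly the combination I'm asking about.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.float32

tflite_model = converter.convert()

# Checking what the converted model actually expects/returns
# (dtypes should reflect the flags above, i.e. uint8 in, float32 out):
interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])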

Petros626 — Jul 19 '23 17:07