MobileNetv2-SSD
Implementation for quantized training
Hello,

I would like to know whether you have implemented quantized training for the specific model ssd mobilenet v2 in your Object Detection API (https://github.com/tensorflow/models/tree/master/research/object_detection), as the TensorFlow team has. From my reading of the code it does not look like it; the model is trained with float32 values. As we know, quantization-aware training generally gives better accuracy than quantizing the model after the fact (post-training quantization).

So the question is how to implement this: are there ready-made functions for quantizing the model during training? I would appreciate a detailed and insightful answer.
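For context, my current understanding (please correct me if I am wrong) is that the TF1 Object Detection API can enable quantization-aware training through a `graph_rewriter` block appended to the pipeline config, roughly like the sketch below. The numeric values are illustrative, not a recommendation:

```
# Sketch of a quantization-aware training config for the TF1 Object Detection API.
# Appended to the end of pipeline.config; values are illustrative assumptions.
graph_rewriter {
  quantization {
    delay: 48000        # train in float for this many steps before inserting fake-quant ops
    weight_bits: 8      # quantize weights to 8 bits
    activation_bits: 8  # quantize activations to 8 bits
  }
}
```

If this is indeed the intended route, does it apply fake-quantization ops to ssd mobilenet v2 correctly, or is additional model-specific work required?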