
QAT for object detection models?

Open tensorbuffer opened this issue 4 years ago • 4 comments

System information

  • TensorFlow version (you are using): 2.6
  • Are you willing to contribute it (Yes/No):

Motivation

We need to run object detection (OD) models on-device after QAT.

Describe the feature

Currently QAT only supports Sequential and functional models, as stated in https://blog.tensorflow.org/2020/04/quantization-aware-training-with-tensorflow-model-optimization-toolkit.html I looked into the OD models and they are subclassed from tf.keras.layers.Layer. Even the backbone (e.g. the feature extractor) is not a functional model (model._is_graph_network is False).

I used quantize_model() and it throws an error since the model is not a Keras model.

Describe how the feature helps achieve the use case

Describe how existing APIs don't satisfy your use case (optional if obvious)


tensorbuffer avatar Sep 22 '21 16:09 tensorbuffer

Hi,

Could you share more info about the OD model? I am confused about whether the model is a Keras model or not: you said it is not a Keras model, but its layers are Keras layers, and _is_graph_network is an attribute of tf.keras.Model.

If the model is a Keras model, you may be able to use quantization annotation; see the examples in https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/quantization/keras/quantize_annotate_model

rino20 avatar Sep 27 '21 14:09 rino20

I did this according to OD's tutorial: https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tf2_colab.ipynb

Next, I want to do QAT of the model. If I do `quantize_model = tfmot.quantization.keras.quantize_model` and then `q_aware_model = quantize_model(model)`,

I will get an error, which comes from https://github.com/tensorflow/model-optimization/blob/570444d4b9afb56e91992e8f5ae61abb12f4384f/tensorflow_model_optimization/python/core/quantization/keras/quantize.py#L124: `ValueError: to_quantize can only be a tf.keras.Model instance. Use the quantize_annotate_layer API to handle individual layers. You passed an instance of type: SSDMetaArch.`

By the way, the next check would also fail (`'SSDMetaArch' object has no attribute '_is_graph_network'`): https://github.com/tensorflow/model-optimization/blob/570444d4b9afb56e91992e8f5ae61abb12f4384f/tensorflow_model_optimization/python/core/quantization/keras/quantize.py#L133
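Paraphrasing those two checks (an illustrative sketch, not the verbatim quantize.py code; the class names are stand-ins), the failure mode becomes clear:

```python
# Stand-ins for the relevant Keras classes, for illustration only.
class Model:
    _is_graph_network = True      # functional/Sequential models

class SubclassedModel(Model):
    _is_graph_network = False     # subclassed tf.keras.Model

class Layer:                      # DetectionModel derives from Layer
    pass

def check_quantizable(to_quantize):
    # First check (quantize.py#L124): must be a tf.keras.Model at all,
    # so a Layer subclass like SSDMetaArch is rejected here.
    if not isinstance(to_quantize, Model):
        raise ValueError('`to_quantize` can only be a `tf.keras.Model` instance.')
    # Second check (quantize.py#L133): subclassed Models are rejected too;
    # a plain Layer that somehow reached this line would raise
    # AttributeError instead, since it has no _is_graph_network.
    if not to_quantize._is_graph_network:
        raise ValueError('Subclassed models are not supported.')
    return True
```

So an OD model fails the very first gate, and even a subclassed tf.keras.Model would fail the second.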

SSDMetaArch is defined in https://github.com/tensorflow/models/blob/master/research/object_detection/meta_architectures/ssd_meta_arch.py#L254 as `class SSDMetaArch(model.DetectionModel)`,

while model.DetectionModel is defined in https://github.com/tensorflow/models/blob/master/research/object_detection/core/model.py#L76 with `_BaseClass = tf.keras.layers.Layer` and `class DetectionModel(six.with_metaclass(abc.ABCMeta, _BaseClass))`.

So you can see the detection model is derived from the Layer class, not Model. I haven't tried using quantize_annotate_layer() to quantize the OD model; it seems strange, since I want to apply QAT to the whole model, which is derived from Layer. And because the OD model contains a backbone model (e.g. the feature extractor) inside it, I expect to hit the error "Quantizing a tf.keras Model inside another tf.keras Model is not supported."

It looks like the TFMOT team and the OD team have a big disconnect on QAT for OD models. Similar issues have been filed on the OD side: https://github.com/tensorflow/models/issues/8935 and https://github.com/tensorflow/models/issues/9835. This is greatly impacting our use of TensorFlow, and we are looking at alternatives, since there is no expected date for a fix. I think the TFMOT team should own these issues, because in TF1, QAT for OD was very simple on the OD side: you just enabled it in the pipeline config file.

tensorbuffer avatar Sep 27 '21 15:09 tensorbuffer

Hi,

Any improvement on this issue? I am still stuck on QAT for OD models with TensorFlow 2.6! https://github.com/tensorflow/models/blob/master/official/projects/qat/vision/README.md

cpadeiro avatar Oct 25 '22 10:10 cpadeiro

Hi, it is not officially available yet, but you can find some OD QAT configs in https://github.com/tensorflow/models/tree/master/official/projects/qat/vision/configs/experiments/retinanet

rino20 avatar Oct 26 '22 08:10 rino20