
Post Training quantization of CenterNet + MobileNet model returns 0 detections

amogh112 opened this issue

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am using the latest TensorFlow Model Garden release and TensorFlow 2.
  • [x] I am reporting the issue to the correct repository. (Model Garden official or research directory)
  • [x] I checked to make sure that this issue has not already been filed.

1. The entire URL of the file you are using

https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/centernet_on_device.ipynb

I modified the above notebook to make the bug reproducible: https://colab.research.google.com/drive/1l-bNKCocTMtrAE4cmZuwdjomDE-nAov0?usp=sharing

2. Describe the bug

I am unable to reuse the post-training quantization functions that work for SSD MobileNet on CenterNet + MobileNetV2. You can see and reproduce my implementation in the notebook linked above. The quantized model returns 0 detections and arbitrary outputs such as:


print(boxes, classes, scores, num_detections)

 [[[0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]
  [0.00756562 0.         0.01513121 0.02269682]]] 
[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]] 
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] 
[0.]

The float32 and float16 models run absolutely fine. I have also tried the quantization code mentioned in https://github.com/tensorflow/models/issues/10006, but I still run into the same problem. The same function works well for the SSD + MobileNetV2 FPN model. Could the team kindly provide a reproducible script for post-training quantization of CenterNet?
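For reference, the post-training quantization flow I am using follows the standard TFLite converter recipe. The sketch below uses a toy Keras model as a stand-in for the exported CenterNet SavedModel, and random data as a stand-in for real calibration images; the input shape and calibration count are placeholders, not values from my actual setup:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the exported CenterNet + MobileNetV2 detection model.
# In the real flow this would be loaded from the SavedModel produced by
# export_tflite_graph_tf2.py.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(320, 320, 3)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

def representative_dataset():
    # In the real setup this yields preprocessed calibration images;
    # random tensors are used here only to keep the sketch self-contained.
    for _ in range(10):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

# Sanity check: the quantized model loads into the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```

This is the same recipe that works for the SSD + MobileNetV2 FPN model; only the SavedModel and calibration images differ in the CenterNet case.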

3. Steps to reproduce

I have modified centernet_on_device.ipynb to save a quantized model and run inference. Please see it here: https://colab.research.google.com/drive/1l-bNKCocTMtrAE4cmZuwdjomDE-nAov0?usp=sharing To run the notebook, only one change is required: adding use_separable_conv: true inside the feature_extractor {...} block of the config file.
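Concretely, the config edit looks like the fragment below (a sketch of the relevant pipeline.config block only; the other feature_extractor fields from the original config stay unchanged):

```
model {
  center_net {
    feature_extractor {
      # existing fields from the original config stay as they are;
      # this is the only line that needs to be added:
      use_separable_conv: true
    }
  }
}
```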

4. Expected behavior

The quantized model should output bounding boxes similar to those of the float32 model.

5. Additional context

Include any logs that would be helpful to diagnose the problem.

6. System information

Google Colab default system

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
  • Mobile device name if the issue happens on a mobile device:
  • TensorFlow installed from (source or binary): using !pip install tf-nightly
  • TensorFlow version (use command below): '2.10.0-dev20220426'
  • Python version: 3.7.13
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

amogh112 · Apr 26, 2022