
Custom layer quantization with quantization-aware training (QAT)

Open ManishwarG opened this issue 2 years ago • 2 comments

Describe the bug
Unable to quantize a custom layer to int8, even after applying quantization-aware training.

System information

TensorFlow version (installed from source or binary): 2.15.0-dev20230814

TensorFlow Model Optimization version (installed from source or binary): 0.7.5

Python version: 3.10.12

Describe the expected behavior
Train a model that contains a custom layer, then export a version in which only that layer is quantized to int8, for later implementation on an FPGA accelerator.

Describe the current behavior
The custom layer that is supposed to be quantized is always exported with un-quantized weights. If I change tf.lite.OpsSet.TFLITE_BUILTINS to tf.lite.OpsSet.TFLITE_BUILTINS_INT8, the layer does get quantized, but the model's accuracy drops from 99% to 9%. I followed the QAT guide on the official website; the link to the Colab notebook is provided below, along with the custom layer code.
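For concreteness, this is roughly what my conversion step looks like (simplified from the notebook; qat_model stands for the quantize-applied model, sketched under "Additional context" below):

```python
import tensorflow as tf

# qat_model is the output of tfmot quantize_apply (see sketch below).
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# This setting converts, but the custom layer's weights stay in float:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,  # needed for the extract-images op
]

# Switching to integer-only builtins quantizes the layer, but the model's
# accuracy collapses from 99% to 9%:
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()
```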

Code to reproduce the issue
Code

Additional context
I used a quantize_config while applying the quantization and passed the necessary classes through the scope. I used tf.lite.OpsSet.SELECT_TF_OPS so that tf.Extract_images still works through quantization.
Adder_Layer.txt
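Roughly, the annotation and scope pattern I am following looks like the sketch below. AdderLayer and AdderQuantizeConfig are placeholder names standing in for the attached layer and its config; the real code is in the notebook. The structure follows the custom-layer example in the TFMOT QAT comprehensive guide:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantizers = tfmot.quantization.keras.quantizers

class AdderLayer(tf.keras.layers.Layer):
    """Hypothetical stand-in for the attached custom layer."""
    def build(self, input_shape):
        self.kernel = self.add_weight(
            name='kernel', shape=(input_shape[-1],), initializer='ones')

    def call(self, inputs):
        return inputs + self.kernel

class AdderQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # 8-bit quantizer for the layer's kernel.
    def get_weights_and_quantizers(self, layer):
        return [(layer.kernel, quantizers.LastValueQuantizer(
            num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]

    def get_activations_and_quantizers(self, layer):
        return []  # the layer has no activation function

    def set_quantize_weights(self, layer, quantize_weights):
        layer.kernel = quantize_weights[0]

    def set_quantize_activations(self, layer, quantize_activations):
        pass

    # 8-bit quantizer for the layer's output.
    def get_output_quantizers(self, layer):
        return [quantizers.MovingAverageQuantizer(
            num_bits=8, symmetric=False, narrow_range=False, per_axis=False)]

    def get_config(self):
        return {}

annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
annotate_model = tfmot.quantization.keras.quantize_annotate_model

annotated = annotate_model(tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(4,)),
    annotate_layer(AdderLayer(), quantize_config=AdderQuantizeConfig()),
]))

# Both the custom layer and its config must be visible inside the scope
# so quantize_apply can deserialize the annotated model.
with tfmot.quantization.keras.quantize_scope(
        {'AdderQuantizeConfig': AdderQuantizeConfig,
         'AdderLayer': AdderLayer}):
    qat_model = tfmot.quantization.keras.quantize_apply(annotated)
```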

ManishwarG commented Aug 14 '23 11:08

@Xhark Could you help debug / triage?

dansuh17 commented Aug 15 '23 23:08

Can I get any response or an update?

ManishwarG commented Aug 26 '23 12:08