Model with SeparableConvs not converting to QAT
Describe the bug A model containing SeparableConv2D layers is not converted to a QAT model by the quantize_apply function.
System information
OS: Linux Ubuntu 20.04 LTS
Python version: 3.8
CUDA/cuDNN version: 11.4
GPU model and memory: NVIDIA GeForce RTX 2060, 6144 MB
TensorFlow version (installed from source or binary): tf-nightly
TensorFlow Model Optimization version (installed from source or binary): 0.6.0
Describe the expected behavior The model should be converted to a QAT model.
Describe the current behavior The model is not converted to a QAT model; quantize_apply fails.
Code to reproduce the issue It can be found in this colab
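For reference, a minimal sketch along the lines of the reproduction (the real code is in the colab; the model below is a hypothetical stand-in):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Hypothetical stand-in model; the actual model is in the linked colab.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),
    tf.keras.layers.SeparableConv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Annotate the model for quantization, then apply QAT.
# quantize_apply is the step where the conversion fails.
annotated = tfmot.quantization.keras.quantize_annotate_model(model)
qat_model = tfmot.quantization.keras.quantize_apply(annotated)
```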
Additional context This bug is blocking my work, so it is critical for me.
@tensorflowbutler When am I supposed to get a response on this?
It's a bug caused by an interaction between SeparableConv and TFOpLambda (tf.split). It isn't the right long-term fix for this bug, but https://github.com/tensorflow/model-optimization/pull/825 (under review) potentially fixes it.
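For anyone unfamiliar with the mechanism: calling a raw TF op such as tf.split on a Keras tensor wraps it in a TFOpLambda layer, which the QAT transforms have to handle specially. A minimal illustration of how such a layer appears (this only shows the mechanism, not tfmot internals):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8, 8, 4))
# A raw TF op applied to a Keras tensor becomes a TFOpLambda layer.
a, b = tf.split(inputs, num_or_size_splits=2, axis=-1)
outputs = tf.keras.layers.Concatenate()([a, b])
model = tf.keras.Model(inputs, outputs)

print([type(layer).__name__ for layer in model.layers])
# Expected to include 'TFOpLambda' for the tf.split call.
```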
You can try this fix by installing with the command below: !pip install git+https://github.com/tensorflow/model-optimization.git@fd5bc4a9202642c7d5536f0542f65cf09cae4713
Thanks!
@Xhark The only thing this patch fixes is saving the model. Inference times and outputs are much worse for regular models compared with the same models saved with tf-nightly.
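One way to check the latency regression is a timing loop along these lines (a hypothetical sketch; the path and input shape are assumptions, not taken from the thread):

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical micro-benchmark to compare inference latency under each
# install; "saved_model_dir" and the input shape are placeholders.
model = tf.keras.models.load_model("saved_model_dir")
x = np.random.rand(1, 32, 32, 3).astype("float32")

model(x)  # warm-up
start = time.perf_counter()
for _ in range(100):
    model(x)
print((time.perf_counter() - start) / 100, "seconds per inference")
```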
@tensorflowbutler It's been a month. I depend on this feature. When is it going to be resolved?