Jaehong Kim
It's an issue that appears when you use tf.nn.relu instead of tf.keras.layers.ReLU. (It gets converted to a TFOpLambda layer, which has trouble with the current QAT API.) Would you please use tf.keras.layers.ReLU if that's okay?
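For reference, a minimal sketch of the swap (the tiny Conv2D model here is just for illustration):
```
import tensorflow as tf
import tensorflow_model_optimization as tfmot

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(8, 3)(inputs)
# x = tf.nn.relu(x)            # becomes a TFOpLambda layer; QAT has trouble with it
x = tf.keras.layers.ReLU()(x)  # Keras layer; works with quantize_model
model = tf.keras.Model(inputs, x)

q_model = tfmot.quantization.keras.quantize_model(model)
```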
The weight input for the FC op is not a weight; that's why we don't use symmetric quantization. I think this is a corner case where TF EinsumDense converts to some TFLite...
We don't support it fully recursively, but you can now quantize a model that contains a sub-model, e.g.:
```
q_base_model = quantize_model(base_model)

original_inputs = tf.keras.Input(IMG_SHAPE)
x = q_base_model(original_inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
output = tf.keras.layers.Dense(1)(x)
model = quantize_model(tf.keras.Model(original_inputs, output))
```
q_base_model is already quantized, but the last line is needed to quantize the layers outside of q_base_model (the GlobalAveragePooling2D and Dense).
I think there are two possibilities: 1. For a model you plan to quantize, it's highly recommended to use `nearest`, because `bilinear` has some gap between TF and TFLite at this moment. (We quantize the output of the...
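For example, a sketch assuming the resize happens via an UpSampling2D layer (adjust to wherever your model actually does the resize):
```
import tensorflow as tf

inputs = tf.keras.Input(shape=(16, 16, 3))
# Prefer nearest-neighbor resizing in a model you plan to quantize;
# bilinear currently has a gap between TF and TFLite.
x = tf.keras.layers.UpSampling2D(size=(2, 2), interpolation='nearest')(inputs)
model = tf.keras.Model(inputs, x)
```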
It's a bug caused by an interaction between SeparableConv and TFOpLambda (tf.split). It's not the right fix for this bug, but https://github.com/tensorflow/model-optimization/pull/825 (under review) potentially fixes it. You can try...
I'm not sure whether this happened in TFMOT or in the TFLite converter. Can you give us some more details or reproduction steps?
Can you add how `CustomLayerMaxPooling1D` is implemented? If it doesn't have a kernel, then you have to change `MyMaxPooling1DQuantizeConfig` to something like:
```
class MyMaxPooling1DQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
  def get_weights_and_quantizers(self, layer):
    return []  # No...
```
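For context, here is a fuller sketch of what a kernel-free QuantizeConfig could look like; the MovingAverageQuantizer on the output is an illustrative choice, not a requirement:
```
import tensorflow_model_optimization as tfmot

class MyMaxPooling1DQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
  def get_weights_and_quantizers(self, layer):
    return []  # no kernel, so no weights to quantize

  def get_activations_and_quantizers(self, layer):
    return []  # no activation function to quantize

  def set_quantize_weights(self, layer, quantize_weights):
    pass  # nothing to set

  def set_quantize_activations(self, layer, quantize_activations):
    pass  # nothing to set

  def get_output_quantizers(self, layer):
    # Quantize only the layer output.
    return [tfmot.quantization.keras.quantizers.MovingAverageQuantizer(
        num_bits=8, per_axis=False, symmetric=False, narrow_range=False)]

  def get_config(self):
    return {}
```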
The command you mentioned works fine for me (Python 3.11.4, pip 23.2.1). Can you also check your pip version? (And can you verify it also happens on master?) Thanks!
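To narrow it down, something like this would show the exact versions on your side (a quick sketch using the standard library's importlib.metadata):
```
import sys
from importlib.metadata import version

print(sys.version)                               # Python version
print(version("pip"))                            # pip version
print(version("tensorflow-model-optimization"))  # installed TFMOT version
```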
Would you please add some more details about how you evaluated and what the expected output is? Can you try the full dataset (0:60000) instead of 0:1000 for QAT? Thanks!
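Something like this is what I mean (a sketch assuming the usual MNIST QAT tutorial names `train_images`, `train_labels`, and `q_aware_model`):
```
# Fine-tune with QAT on the full training set rather than a 1000-image slice.
q_aware_model.fit(
    train_images[0:60000], train_labels[0:60000],  # previously train_images[0:1000]
    batch_size=500, epochs=1, validation_split=0.1)
```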