
Set zero_debias=True for Quantization Aware Training

Open smohan10 opened this issue 3 years ago • 3 comments

Hi, I see that the TF 1 QAT library uses zero_debias=True. However, TFMOT's quantization library makes the following calls in quant_ops.py:

```python
assign_min = moving_averages.assign_moving_average(
    min_var, range_min, ema_decay, zero_debias=False, name='AssignMinEma')
assign_max = moving_averages.assign_moving_average(
    max_var, range_max, ema_decay, zero_debias=False, name='AssignMaxEma')
```
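For context, here is a minimal sketch (in plain Python, not the TF implementation, which tracks auxiliary variables internally) of what zero_debias changes: a plain EMA initialized at zero is biased toward zero for the first few updates, while a zero-debiased EMA divides out that bias, so the min/max range estimates are accurate from the very first step.

```python
def ema_updates(values, decay, zero_debias):
    """Return the moving-average estimate after each incoming value.

    Illustrative sketch of the zero_debias semantics in
    moving_averages.assign_moving_average; not TF's actual code.
    """
    biased = 0.0  # accumulator starts at zero, hence the initial bias
    estimates = []
    for step, v in enumerate(values, start=1):
        biased = decay * biased + (1.0 - decay) * v
        if zero_debias:
            # Divide out the bias from the zero initialization.
            estimates.append(biased / (1.0 - decay ** step))
        else:
            estimates.append(biased)
    return estimates

# With a constant signal of 5.0 and decay=0.99, the plain EMA starts at
# 0.05 and converges slowly, while the debiased EMA is 5.0 immediately.
plain = ema_updates([5.0] * 3, 0.99, zero_debias=False)
debiased = ema_updates([5.0] * 3, 0.99, zero_debias=True)
```

This is why the flag matters for QAT: with zero_debias=False, early range_min/range_max estimates are pulled toward zero, which can shrink the quantization ranges at the start of training.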

When I set zero_debias to True, I hit this error:

`Tensor.op is meaningless when eager execution is enabled.`

Do you have plans to support zero_debias=True for QAT in the future?

smohan10 avatar Oct 08 '21 01:10 smohan10

@Xhark could you take a look at this one?

abattery avatar Oct 12 '21 04:10 abattery

Would you please provide more details? Did you try to use TF1 QAT on a TF1 model, and it raised that error?

Xhark avatar Oct 12 '21 09:10 Xhark

Sure. Let's take the case of TF1 QAT. The moving_averages.assign_moving_average() call in TF1 QAT's quant_ops.py has the zero_debias flag set to True by default, and QAT works as expected.

Now switch to TF2 QAT. In TFMOT's quant_ops.py, the moving_averages.assign_moving_average() call sets zero_debias to False.

This behavior differs from TF1 QAT. To reproduce the TF1 behavior, I set zero_debias in TFMOT to True and hit this error: `Tensor.op is meaningless when eager execution is enabled.`

Can the TFMOT library for TF2 QAT support zero_debias=True?

Let me know if you need additional information!

smohan10 avatar Oct 12 '21 18:10 smohan10