model-optimization
Add batch norm to default_n_bit_quantize_registry and default_8_bit_quantize_registry
- TensorFlow version (you are using): 2.13
Motivation
There are many models that use batch norm in places not covered by the existing cases in the registry. Adding batch norm to the registry would allow users to apply QAT to such models.
At present I am editing both registries by adding the line
_QuantizeInfo(layers.BatchNormalization, ['gamma'], [], True),
which works for my case, but it would be nice to have a more general fix.
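In the meantime, a workaround that avoids patching the registries is to annotate the BatchNormalization layer with a custom QuantizeConfig through the public API. The sketch below mirrors the registry entry above (quantize gamma, no activations, quantize the output); the BatchNormQuantizeConfig name and the 8-bit quantizer settings are my own choices for illustration, not something tfmot ships.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer


class BatchNormQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    """Hypothetical config mirroring
    _QuantizeInfo(layers.BatchNormalization, ['gamma'], [], True)."""

    def get_weights_and_quantizers(self, layer):
        # Quantize gamma as the layer's only quantized weight.
        return [(layer.gamma,
                 LastValueQuantizer(num_bits=8, symmetric=True,
                                    narrow_range=False, per_axis=False))]

    def get_activations_and_quantizers(self, layer):
        # No activation attributes to quantize, matching the registry entry.
        return []

    def set_quantize_weights(self, layer, quantize_weights):
        layer.gamma = quantize_weights[0]

    def set_quantize_activations(self, layer, quantize_activations):
        pass

    def get_output_quantizers(self, layer):
        # Quantize the layer output (the `True` flag in _QuantizeInfo).
        return [MovingAverageQuantizer(num_bits=8, symmetric=False,
                                       narrow_range=False, per_axis=False)]

    def get_config(self):
        return {}


# Usage sketch: annotate the batch norm layer, then apply QAT inside
# quantize_scope so the custom config can be deserialized.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, input_shape=(10,)),
    tfmot.quantization.keras.quantize_annotate_layer(
        tf.keras.layers.BatchNormalization(),
        quantize_config=BatchNormQuantizeConfig()),
    tf.keras.layers.ReLU(),
])

with tfmot.quantization.keras.quantize_scope(
        {'BatchNormQuantizeConfig': BatchNormQuantizeConfig}):
    annotated = tfmot.quantization.keras.quantize_annotate_model(model)
    qat_model = tfmot.quantization.keras.quantize_apply(annotated)
```

This only helps when you can annotate layers explicitly; a registry entry would still be needed to make plain quantize_model calls work out of the box.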
@Xhark Could you take a look at this issue? Thank you! :)
Any update on this?