A way to freeze activation encodings with aimet_tensorflow
aimet_torch has a feature to freeze activations via load_and_freeze_encodings(), and it works correctly after applying these changes: https://github.com/quic/aimet/pull/2845.
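For context, this is roughly the aimet_torch flow I mean (a minimal sketch; MyModel, the input shape, and the encodings file name are placeholders):

```python
import torch
from aimet_torch.quantsim import QuantizationSimModel

model = MyModel().eval()                    # placeholder float model
dummy_input = torch.randn(1, 3, 224, 224)   # placeholder input shape

sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8, default_output_bw=8)

# Load previously exported encodings and freeze them, so subsequent QAT
# updates the weights but leaves the quantization grids untouched.
sim.load_and_freeze_encodings("model_torch.encodings")

# ... fine-tune sim.model (QAT) with the frozen encodings ...
```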
aimet_tensorflow only has set_and_freeze_param_encodings() for parameter freezing; at least I don't know of a way to freeze activations. Sometimes it is beneficial to freeze input and output activation encodings, depending on the normalization and the bit width used, and those frozen encodings should then be respected during QAT. It would be a great feature to have in aimet_tensorflow as well!
For my purposes, aimet_tensorflow's Keras implementation is what I would need (see the sketch below), but of course someone else might need it for the original aimet_tensorflow.
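Roughly what I am after, sketched against the Keras quantsim API. Heavy caveat: the input_quantizers/output_quantizers attributes and freeze_encoding() below are my assumptions, based on how set_and_freeze_param_encodings() behaves for parameters and on the aimet_torch quantizer API; they are not a documented Keras interface, and forward_pass_callback is a placeholder calibration function:

```python
from aimet_tensorflow.keras.quantsim import QuantizationSimModel

sim = QuantizationSimModel(model)  # model: a tf.keras.Model

# Parameter encodings can already be loaded and frozen today:
sim.set_and_freeze_param_encodings("model_keras.encodings")

# Desired behaviour: compute (or load) activation encodings, then freeze
# them so QAT no longer adjusts them.
# forward_pass_callback(model, args) is a placeholder that runs
# calibration data through the model.
sim.compute_encodings(forward_pass_callback, forward_pass_callback_args=None)

for layer in sim.model.layers:
    # Assumption: quantize-wrapper layers expose input/output quantizer lists.
    if hasattr(layer, "input_quantizers"):
        for quantizer in list(layer.input_quantizers) + list(layer.output_quantizers):
            if quantizer.encoding is not None:
                quantizer.freeze_encoding()  # assumed, mirroring aimet_torch
```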
Are there any plans to add this feature in upcoming releases of AIMET?
Hi @tholmb, aimet_tensorflow is in maintenance mode and will not get any new features.
Please use https://pypi.org/project/aimet-torch/ and https://pypi.org/project/aimet-onnx for your latest workflows, or use https://github.com/quic/aimet/releases for the latest aimet-tensorflow package.