Sangeetha
Hi @Centient - Apologies for the delayed response. Thank you for your interest. Could you please share the config you used to achieve 63.21% accuracy? Perhaps we could review that...
@Centient @PaulZhangIsing Models are made available in the AIMET Model Zoo: https://github.com/quic/aimet-model-zoo. Please take a look there for more information. PyTorch: https://github.com/quic/aimet-model-zoo/tree/develop/zoo_torch/ and TensorFlow:...
@LLNLanLeN Thank you for your query and for being an active user of AIMET. Yes, we do plan to add support for quantization of transformers. Please do watch this space...
@LLNLanLeN It's in the works at the moment. =) You could potentially give it a try with Hugging Face PyTorch models, starting with the BERT uncased model (https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py). Please note that the...
Hi @17818587795 Were you trying this with a PyTorch or TF model? Here's an example config to add in order to enable per-channel quantization on a given PyTorch model: https://github.com/quic/aimet/blob/a0c7b3bb88cad769dc8130bf3a87a6c86128ba69/TrainingExtensions/torch/test/python/test_per_channel_quantization.py#L788.
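For intuition on why per-channel quantization is worth enabling, here is a minimal NumPy sketch (not AIMET's actual API, just toy quantize/dequantize math): when output channels have very different weight ranges, a single per-tensor scale wastes precision on the small-range channel, while one scale per channel fits each range.

```python
import numpy as np

def fake_quantize(x, scale):
    # Symmetric 8-bit quantize then dequantize with the given scale(s).
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale

# Toy "weight tensor": 2 output channels with very different ranges.
w = np.array([[0.01, -0.02, 0.015],
              [5.0, -4.0, 3.0]])

# Per-tensor: one scale derived from the whole tensor's max magnitude.
per_tensor = fake_quantize(w, np.abs(w).max() / 127)

# Per-channel: one scale per output channel (axis 0).
scales = np.abs(w).max(axis=1, keepdims=True) / 127
per_channel = fake_quantize(w, scales)

# The small-range channel is reconstructed far more accurately per-channel.
err_pt = np.abs(per_tensor[0] - w[0]).max()
err_pc = np.abs(per_channel[0] - w[0]).max()
```

In AIMET itself this is controlled through the quantsim configuration file (as in the linked test), not hand-written math; the sketch above only shows the effect being configured.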
Hi @TingfengTang Thank you for your message. Agreed that applying the model guidelines can get tedious. Yes, we are actively working on adding support for torch.fx in future releases (example:...
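As a rough illustration of the kind of graph rewriting torch.fx enables (a standalone sketch, not AIMET's actual model-preparer code): one common model guideline is to express activations as modules rather than functional calls, and torch.fx can automate that rewrite.

```python
import torch
import torch.nn as nn
import torch.fx as fx

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        # Functional relu: invisible as a leaf module to module-based tooling.
        return torch.nn.functional.relu(self.conv(x))

traced = fx.symbolic_trace(Net())

count = 0
for node in traced.graph.nodes:
    if node.op == "call_function" and node.target is torch.nn.functional.relu:
        # Register an equivalent nn.ReLU module and call it instead.
        name = f"relu_{count}"
        count += 1
        traced.add_module(name, nn.ReLU())
        with traced.graph.inserting_after(node):
            new_node = traced.graph.call_module(name, args=node.args)
        node.replace_all_uses_with(new_node)
        traced.graph.erase_node(node)

traced.recompile()
```

The rewritten `traced` module computes the same function, but the activation now appears as a named submodule that quantization tooling can wrap.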
Hi @YouYueHuang Just wanted to follow up on this. Do you have any code ready for review, or any further questions for us? Please let us know if you...
Hi @sohils Thank you for your query. Yes, the solution you suggested is the way to work around this. AIMET does support a subset of these definitions that one...
@leo19850812 Thanks for reporting the issue. @quic-bharathr, could you please take a look?
@bwery Thank you for reporting your observations. @quic-bharathr, could you please take a look at this?