
question: [Quantization] Which files to change to make inference faster for Q8BERT?

Open sarthaklangde opened this issue 4 years ago • 1 comment

I know from previous issues that Q8BERT was just an experiment to measure the accuracy of a quantized BERT model. But given that the accuracy is good, what changes would need to be made to the torch.nn.quantization file to replace the FP32 operations with INT8 operations?

Replacing the FP32 Linear layers with torch.nn.quantized.Linear should theoretically work, since that module has optimized INT8 operations, but in practice it doesn't speed things up. The same goes for the other layers.
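For reference, here is the kind of swap I mean, as a minimal sketch using PyTorch's built-in dynamic quantization rather than a hand-replacement of each layer. The `FeedForward` module below is just a stand-in I made up for one BERT feed-forward block, not code from this repo; `torch.quantization.quantize_dynamic` converts every `nn.Linear` to an INT8 variant in one call:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for one BERT feed-forward block; in the real model
# the same conversion would apply to all of its Linear layers.
class FeedForward(nn.Module):
    def __init__(self, hidden=768, intermediate=3072):
        super().__init__()
        self.fc1 = nn.Linear(hidden, intermediate)
        self.fc2 = nn.Linear(intermediate, hidden)

    def forward(self, x):
        return self.fc2(torch.nn.functional.gelu(self.fc1(x)))

model = FeedForward().eval()

# Dynamic quantization: weights are quantized to INT8 ahead of time,
# activations are quantized on the fly at inference.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128, 768)
with torch.no_grad():
    out = qmodel(x)

# The Linear layers are no longer plain nn.Linear after conversion.
print(isinstance(qmodel.fc1, nn.Linear), out.shape)
```

Note that this is post-training dynamic quantization, not the quantization-aware training scheme Q8BERT uses, so it answers the speed question but not how to reuse Q8BERT's trained INT8 weights.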

If someone could point out how to improve the inference speed (hints, tips, directions, code, anything), it would be very helpful: the model's accuracy is really good and I would like to use it for downstream tasks. I'd also be happy to open a PR once the changes are done so they can be merged into the main repo.

Thank you!

sarthaklangde avatar May 18 '21 05:05 sarthaklangde

Did you find the answer to this?

Ayyub29 avatar Aug 25 '22 03:08 Ayyub29