QAT: TRT 8 compatible workflow
Hi @jaybdub
I am introducing a new QAT (quantization-aware training) workflow that is compatible with TensorRT 8.
TensorRT 8 introduced the IQuantizeLayer and IDequantizeLayer, which have to be placed manually in the network following NVIDIA's Q/DQ placement guidelines.
I have added support for quantizing nn.Conv2d, nn.MaxPool2d and nn.AdaptiveAvgPool2d - the layers necessary to quantize ResNet(s). I have also added a QuantGenericTensor, which can be used to insert a QDQ layer anywhere in the model based on NVIDIA's guidelines; a rough sketch of the pattern is shown below.
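To make the intent concrete, here is a minimal sketch of the fake-quantization (Q/DQ) pattern such modules follow during QAT. The names FakeQuantize and QuantConv2dSketch and the fixed scale are illustrative placeholders, not the API actually added under contrib:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeQuantize(nn.Module):
    """Q/DQ pair: int8 fake quantization applied to a tensor.

    The scale here is a fixed placeholder; in a real QAT workflow it
    would be calibrated or learned during training.
    """
    def __init__(self, scale: float = 0.05):
        super().__init__()
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quantize then dequantize to the int8 range [-128, 127];
        # TensorRT 8 maps this pattern onto IQuantizeLayer / IDequantizeLayer.
        return torch.fake_quantize_per_tensor_affine(x, self.scale, 0, -128, 127)

class QuantConv2dSketch(nn.Module):
    """Conv2d with Q/DQ inserted on both the input activation and the weight."""
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, **kwargs)
        self.input_quant = FakeQuantize()
        self.weight_quant = FakeQuantize()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.input_quant(x)
        w = self.weight_quant(self.conv.weight)
        return F.conv2d(x, w, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, self.conv.groups)
```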
This PR also introduces the option to choose between per-tensor quantization and per-channel quantization. All quant layers are scriptable with torch.jit.script.
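For readers unfamiliar with the distinction, the snippet below contrasts the two granularities using PyTorch's built-in fake-quantization ops (not the PR's own modules):

```python
import torch

w = torch.randn(64, 32, 3, 3)  # e.g. a conv weight tensor

# Per-tensor: a single scale / zero-point pair for the whole tensor.
per_tensor = torch.fake_quantize_per_tensor_affine(w, 0.05, 0, -128, 127)

# Per-channel: one scale / zero-point per output channel (axis 0),
# which typically preserves weight accuracy better.
scales = (w.abs().amax(dim=(1, 2, 3)) / 127.0).clamp(min=1e-8)
zero_points = torch.zeros(w.size(0), dtype=torch.int32)
per_channel = torch.fake_quantize_per_channel_affine(w, scales, zero_points, 0, -128, 127)
```

TensorRT generally uses per-channel scales for weights and per-tensor scales for activations, so exposing both options matters for accuracy.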
Most of the files that I have modified or added are under the contrib folder, so the changes do not affect the main torch2trt library.
I will continue to add support for more layers, but I believe this PR is large enough to land as-is; I can then put up smaller PRs that add more functionality.
The entire workflow has been tested with the PyTorch NGC container 22.04-py3.
Thanks.