
Quantization Capabilities in PyTorch

Open nik13 opened this issue 5 years ago • 1 comments

Hi Team,

Thanks for the great tool for model compression research.

Since you've stated that you're planning to add PTQ support and/or the ability to export quantized models to ONNX, I'd be very interested to know whether that's still in the pipeline, or whether it's better to rely on PyTorch's built-in quantization capabilities or TensorRT instead.

Thanks!

nik13 avatar Jun 16 '20 21:06 nik13
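For context, the "PyTorch internal quantization capabilities" mentioned above include post-training dynamic quantization via the eager-mode API. A minimal sketch (the toy model and shapes here are made up for illustration):

```python
import torch
import torch.nn as nn

# Toy model for illustration only
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 16)
out = quantized(x)
print(out.shape)
```

Note that exporting such quantized models to ONNX has historically been limited, which is part of what motivates this issue.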

A formal method or pipeline for exporting models after PTQ would be awesome.

shazib-summar avatar Jun 18 '20 08:06 shazib-summar