
How to initialize and use YOLO-NAS quantized models as listed in docs?

veb-101 opened this issue 1 year ago · 4 comments

For the YOLO-NAS FP16 model, is it simply this?

yolo_nas_s = super_gradients.training.models.get("yolo_nas_s", pretrained_weights="coco").to(torch.half)

What's the procedure to load/initialize the YOLO-NAS INT8 quantized model?

Do we need to perform some image preprocessing to use them?
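For context, here is roughly the FP16 flow I had in mind. It's only a sketch, and the CUDA device, the 640x640 input size, and the dummy input are my own assumptions rather than anything from the docs:

import torch
import super_gradients

# Load the pretrained FP32 model, then cast it to half precision
yolo_nas_s = super_gradients.training.models.get("yolo_nas_s", pretrained_weights="coco")
yolo_nas_s = yolo_nas_s.to("cuda").half().eval()

# Dummy 640x640 batch, also cast to half so the dtypes match the model
dummy_input = torch.rand(1, 3, 640, 640, device="cuda").half()

with torch.no_grad():
    preds = yolo_nas_s(dummy_input)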

veb-101 avatar May 04 '23 18:05 veb-101

Hello @veb-101, I refer you to this document. Let me know if this helps!

NatanBagrov avatar May 05 '23 07:05 NatanBagrov

The doc contains details about how to do QAT and PTQ, but it doesn't mention the procedure for loading the resulting trained model for inference.

Specifically, I want to know how to load and use the quantized YOLO-NAS FP16 and INT8 models listed in the README documentation.

The following line loads the model in FP32 precision.

yolo_nas_s = super_gradients.training.models.get("yolo_nas_s", pretrained_weights="coco")

What are the next steps I need to perform to load and use the quantized version (FP16 and INT8) as mentioned in the docs?

Edit: Removed question about QSP and QCI blocks

veb-101 avatar May 05 '23 07:05 veb-101

To load the model for inference, you should compile it with TensorRT. The INT8 QAT model is exportable to ONNX with Q/DQ layers, which TensorRT then compiles into actual INT8 operations.

You can follow the code of QATTrainer to see how the FP32 model is quantized and calibrated, and then exported to ONNX.
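For example, once you have the quantized and calibrated model in hand, the ONNX export step itself is plain PyTorch. This is only a rough sketch: the input size, file name, and opset are placeholders, and QATTrainer may set additional export options.

import torch

quantized_model = quantized_model.eval().cuda()
dummy_input = torch.randn(1, 3, 640, 640, device="cuda")

# Export to ONNX; the Q/DQ (QuantizeLinear/DequantizeLinear) nodes inserted during
# quantization remain in the graph, which is what lets TensorRT build INT8 kernels
torch.onnx.export(
    quantized_model,
    dummy_input,
    "yolo_nas_s_qat.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)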

To get an INT8 model, as I mentioned, you need to compile it with TensorRT, which can be done using trtexec. Here's a snippet:

/usr/src/tensorrt/bin/trtexec --onnx=$1.onnx --workspace=2048 --avgRuns=100 --duration=15 --int8 --fp16 --saveEngine=$1.engine

Read more about TensorRT in NVIDIA's docs.
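If it helps, here is a rough sketch of running the resulting engine from Python using the TensorRT 8.x binding-based API together with pycuda. Treat it as an illustration only: the engine path, the input layout, and the preprocessed_image variable are placeholders, and newer TensorRT releases expose a slightly different tensor API.

import numpy as np
import pycuda.autoinit  # noqa: F401 - creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by trtexec
with open("yolo_nas_s.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pinned host buffers and device buffers for every binding
inputs, outputs, bindings = [], [], []
stream = cuda.Stream()
for binding in engine:
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    size = trt.volume(engine.get_binding_shape(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    device_mem = cuda.mem_alloc(host_mem.nbytes)
    bindings.append(int(device_mem))
    (inputs if engine.binding_is_input(binding) else outputs).append((host_mem, device_mem))

# Copy a preprocessed image (NCHW, matching the export size) in, run, copy results out
np.copyto(inputs[0][0], preprocessed_image.ravel())
cuda.memcpy_htod_async(inputs[0][1], inputs[0][0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for host_mem, device_mem in outputs:
    cuda.memcpy_dtoh_async(host_mem, device_mem, stream)
stream.synchronize()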

NatanBagrov avatar May 05 '23 14:05 NatanBagrov