[WIP] Support deploying MMRazor quantized models
Motivation
The related PR in MMRazor is https://github.com/open-mmlab/mmrazor/pull/365
MMRazor is developing quantization algorithms, including PTQ and QAT.
This PR contains draft code to deploy MMRazor quantized models in MMDeploy, covering the following two points.
Export FX Graph
An MMRazor quantized model is an FX graph, and the current function rewriter cannot handle FX graphs correctly. The function rewriter has been adjusted in this PR so that it can.
Export Quantized ONNX
Different backends expect different ONNX formats for quantized models; quantized ONNX exporters for TensorRT and OpenVINO are implemented in this PR.
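As an illustration only (node and helper names here are assumptions, not this PR's exact code), a backend-side pass could rewrite the placeholder fake-quant nodes of the temporary ONNX (described under Modification below) into the QuantizeLinear/DequantizeLinear (QDQ) pairs that both TensorRT and OpenVINO can consume:

```python
import onnx
from onnx import helper


def fakequant_to_qdq(model: onnx.ModelProto) -> onnx.ModelProto:
    """Rewrite placeholder FakeQuantize nodes into QDQ pairs (sketch)."""
    graph = model.graph
    new_nodes = []
    for node in graph.node:
        if node.op_type == 'FakeQuantize':  # placeholder from the export step
            x, scale, zero_point = node.input[0], node.input[1], node.input[2]
            q_out = node.output[0] + '_quantized'
            # QuantizeLinear: float tensor -> int8 using scale/zero_point.
            new_nodes.append(helper.make_node(
                'QuantizeLinear', [x, scale, zero_point], [q_out]))
            # DequantizeLinear: back to float, keeping the graph float-typed
            # while preserving the quantization parameters for the backend.
            new_nodes.append(helper.make_node(
                'DequantizeLinear', [q_out, scale, zero_point],
                [node.output[0]]))
        else:
            new_nodes.append(node)
    del graph.node[:]
    graph.node.extend(new_nodes)
    return model
```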
Modification
Function Rewriter
The original function rewriter wrapped the target function and passed ctx as its first argument. To handle FX graphs, this PR no longer uses a wrapper: the original function is replaced directly by the rewritten function, ctx is removed from the argument list, and ctx becomes a global variable instead.
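Below is a minimal sketch of the two styles (hypothetical names such as GLOBAL_CTX and ToyModel; not MMDeploy's actual implementation), showing why a signature-preserving replacement traces cleanly under torch.fx:

```python
import torch
from torch import nn


class ToyModel(nn.Module):
    def forward(self, x):
        return x + 1


# Old style (sketch): the rewriter wrapped the target and injected ctx,
#     def rewritten_forward(ctx, self, x): ...
# so the replacement's signature no longer matched the original, which
# breaks torch.fx tracing of the rewritten call.

# New style: keep the original signature and read ctx from module scope.
GLOBAL_CTX = {'backend': 'tensorrt'}  # set by the rewriter before export


def new_rewritten_forward(self, x):
    ctx = GLOBAL_CTX  # fetched as a global, not from the argument list
    scale = 2 if ctx['backend'] == 'tensorrt' else 1
    return x * scale


ToyModel.forward = new_rewritten_forward  # direct replacement
traced = torch.fx.symbolic_trace(ToyModel())
print(traced.code)  # the traced graph sees only (self, x)
```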
Quantized ONNX Exporter
This PR adds a fake-quant symbolic op, with which a temporary, non-runnable ONNX model can be exported. Each backend's quantized ONNX exporter then converts it into the final deployable ONNX.
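A hedged sketch of such a symbolic op (the op name mmdeploy::FakeQuantize and the exact registration are assumptions, not necessarily what this PR uses):

```python
from torch.onnx import register_custom_op_symbolic


def fake_quantize_symbolic(g, x, scale, zero_point, quant_min, quant_max):
    # Emit a single placeholder node carrying the quantization parameters;
    # a backend-specific pass later rewrites it into the deployable form.
    return g.op('mmdeploy::FakeQuantize', x, scale, zero_point)


# aten::fake_quantize_per_tensor_affine is the op that PyTorch fake-quant
# modules (QAT/PTQ) dispatch to during ONNX export.
register_custom_op_symbolic(
    'aten::fake_quantize_per_tensor_affine', fake_quantize_symbolic, 11)
```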
The end-to-end pipeline can be tested with, for example:
python tools/deploy.py configs/mmdet/detection/detection_openvino_dynamic-800x1344-quantize.py $RETINANET $FLOAT_CKPT demo/resources/det.jpg --show