paddle3.0rc1: exporting an INT8 ONNX model after qat_train
Search before asking
- [x] I have searched the question and found no related answer.
Please ask your question
After quantization-aware training, the exported ONNX model has the same file size as before quantization and runs at the same speed. How do I export a quantized model in INT8 format?
Quantization-aware training script:
python deploy/slim/quant/qat_train.py \
    --config contrib/PP-HumanSeg/configs/pphumanseg_hu_stdc1.yml \
    --model_path output_humansegv2_512x288_stdc1_3/best_model/model.pdparams \
    --learning_rate 0.0005 \
    --do_eval \
    --use_vdl \
    --save_interval 1000 \
    --batch_size 128 \
    --iters 10940 \
    --num_workers 16 \
    --save_dir output_quant
Quantized model export:
python deploy/slim/quant/qat_export.py \
    --config contrib/PP-HumanSeg/configs/pphumanseg_hu_stdc1.yml \
    --model_path output_quant/best_model/model.pdparams \
    --save_dir output_quant_infer
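
One way to check whether the quantization actually made it into the exported inference model is to load the static program and list its op types; if there are no quantize/dequantize ops, the exported graph is still plain FP32. This is only a minimal sketch: it assumes the export step above wrote output_quant_infer/model.pdmodel and model.pdiparams (under Paddle 3.0 the saved filenames and format may differ).

import paddle

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

# Assumed path prefix; adjust to whatever qat_export.py actually wrote.
prog, feed_names, fetch_vars = paddle.static.load_inference_model(
    "output_quant_infer/model", exe)

# Collect the distinct op types in the inference program.
op_types = sorted({op.type for block in prog.blocks for op in block.ops})
print(op_types)
# Expect fake_quantize_* / fake_dequantize_* (or quantize_linear /
# dequantize_linear) ops if the QAT information was preserved.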
ONNX export:
import os

import numpy as np
import paddle

from paddleseg import utils
from paddleseg.cvlibs import Config, SegBuilder
from paddleseg.utils import logger


def export_onnx(args):
    cfg = Config(args.config)
    # cfg.check_sync_info()
    # model = cfg.model
    builder = SegBuilder(cfg)
    model = builder.model
    if args.model_path is not None:
        utils.load_entire_model(model, args.model_path)
        logger.info('Loaded trained params of model successfully')
    model.eval()

    if args.print_model:
        print(model)

    input_shape = [1, 3, args.height, args.width]
    print("input shape:", input_shape)
    input_data = np.random.random(input_shape).astype('float32')
    model_name = os.path.basename(args.config).split(".")[0]

    # Sanity-check the dygraph model with a random input before exporting.
    # run_paddle is a helper defined elsewhere in this script.
    paddle_out = run_paddle(model, input_data)
    print("out shape:", paddle_out.shape)
    print("The paddle model has been predicted by PaddlePaddle.\n")

    # Keep the exported inputs/outputs as float32.
    input_spec = paddle.static.InputSpec(input_shape, 'float32', 'x')
    onnx_model_path = os.path.join(args.save_dir, model_name + "_model")
    paddle.onnx.export(
        model, onnx_model_path, input_spec=[input_spec], opset_version=11)
    print("Completed export onnx model.\n")