
Using ppyolo_tiny in C++ with the ONNX Runtime backend

Open saeedkhanehgir opened this issue 1 year ago • 6 comments

Hi, thanks for sharing this project. I trained ppyolo_tiny with this repo, then converted the trained model to ONNX following this link. While exporting the inference model and converting it to ONNX, I set TestReader.inputs_def.image_shape=[3,416,416]. After that, I built FastDeploy with the modified scripts below: ppyolo.cc.txt ppyolo.h.txt ppyoloe.cc.txt ppyoloe.h.txt

infer_ppyolo.cc.txt compiled without problems, but when I ran it, I got the error below.

[ERROR] fastdeploy/vision/detection/ppdet/ppyoloe.cc(68)::BuildPreprocessPipelineFromConfig	Failed to load yaml file , maybe you should check this file.
[ERROR] fastdeploy/vision/detection/ppdet/ppyolo.cc(37)::Initialize	Failed to build preprocess pipeline from configuration file.
Failed to initialize.

saeedkhanehgir avatar Nov 08 '22 12:11 saeedkhanehgir

Hi, @saeedkhanehgir

Here is a export model of PicoDet(Also exported from PaddleDetection) https://bj.bcebos.com/paddlehub/fastdeploy/picodet_l_320_coco_lcnet.tgz

There's an infer_cfg.yml inside the directory; it configures how to preprocess the image and feed data to the model. Your error message means the infer_cfg.yml was not loaded.

Also, there's no need to export Paddle to ONNX yourself; FastDeploy supports loading a Paddle model directly and running inference with ONNX Runtime.
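As a reference for loading the exported Paddle inference model directly, a minimal sketch might look like the following (the file paths are placeholders; the PPYOLO constructor and RuntimeOption calls follow FastDeploy's vision detection examples, so check them against your FastDeploy version):

```cpp
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Placeholder paths to the exported Paddle inference model directory.
  std::string model_file = "ppyolo_tiny/model.pdmodel";
  std::string params_file = "ppyolo_tiny/model.pdiparams";
  std::string config_file = "ppyolo_tiny/infer_cfg.yml";  // must exist, or init fails

  fastdeploy::RuntimeOption option;
  option.UseCpu();
  option.UseOrtBackend();  // run the Paddle model through ONNX Runtime

  auto model = fastdeploy::vision::detection::PPYOLO(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  auto im = cv::imread("test.jpg");
  fastdeploy::vision::DetectionResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;
  return 0;
}
```

Note that the config_file argument points at the infer_cfg.yml mentioned above; the "Failed to load yaml file" error in the first comment indicates that path did not resolve to the file.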

jiangjiajun avatar Nov 08 '22 13:11 jiangjiajun

Thanks @jiangjiajun. I forgot to mention that I am working on a Raspberry Pi. I only added option.UseOrtBackend(); to infer_ppyolo.cc and compiled. After that, when I run it, I get the error below.

[INFO] fastdeploy/vision/common/processors/transform.cc(45)::FuseNormalizeCast	Normalize and Cast are fused to Normalize in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW	Normalize and HWC2CHW are fused to NormalizeAndPermute  in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert	BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[WARNING] fastdeploy/fastdeploy_model.cc(86)::InitRuntime	PaddleDetection/PPYOLO is not supported with backend Backend::ORT.
[WARNING] fastdeploy/fastdeploy_model.cc(96)::InitRuntime	FastDeploy will choose Backend::OPENVINO for model inference.
[ERROR] fastdeploy/fastdeploy_model.cc(146)::CreateCpuBackend	Found no valid backend for model: PaddleDetection/PPYOLO
[ERROR] fastdeploy/vision/detection/ppdet/ppyolo.cc(43)::Initialize	Failed to initialize fastdeploy backend.
Failed to initialize.

I think the ORT backend is not supported for the PPYOLO model on CPU (Raspberry Pi). Is that correct?

saeedkhanehgir avatar Nov 09 '22 06:11 saeedkhanehgir

I haven't tested ppyolo-tiny before; you could try modifying the code at https://github.com/PaddlePaddle/FastDeploy/blob/develop/fastdeploy/vision/detection/ppdet/ppyolo.cc#L26

Add Backend::ORT to valid_cpu_backends and rebuild FastDeploy.
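The suggested change is roughly the following (a sketch based on the linked ppyolo.cc; the exact set of backends already listed in your checkout may differ, so inspect the constructor before editing):

```cpp
// In fastdeploy/vision/detection/ppdet/ppyolo.cc, inside the PPYOLO constructor:
// append Backend::ORT to the list of CPU backends the model accepts,
// keeping whatever entries are already there.
valid_cpu_backends = {Backend::OPENVINO, Backend::PDINFER, Backend::ORT};
```

After this change, the "is not supported with backend Backend::ORT" warning should no longer trigger, and FastDeploy will accept ONNX Runtime as a CPU backend for this model.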

jiangjiajun avatar Nov 09 '22 07:11 jiangjiajun

Also, if you are running on a Raspberry Pi, Backend::Lite is available; try building FastDeploy with -DENABLE_LITE_BACKEND=ON.
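A typical from-source build on the device might look like this (only -DENABLE_LITE_BACKEND=ON comes from the comment above; the other flags follow FastDeploy's build docs and may need adjusting for your setup):

```shell
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
cmake .. -DENABLE_LITE_BACKEND=ON \
         -DENABLE_VISION=ON \
         -DCMAKE_INSTALL_PREFIX=${PWD}/installed_fastdeploy
make -j4
make install
```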

jiangjiajun avatar Nov 09 '22 07:11 jiangjiajun

Thanks @jiangjiajun. For the ppyolo-tiny model, I added Backend::ORT to valid_cpu_backends, rebuilt, and it worked. Excuse me, I have another question: for Raspberry Pi aarch64, is the Lite backend better than the ORT backend?

saeedkhanehgir avatar Nov 22 '22 13:11 saeedkhanehgir

Yes. The Lite backend also supports half-precision (FP16) and INT8 quantized models on ARM.

jiangjiajun avatar Nov 22 '22 13:11 jiangjiajun

This issue has not been updated for a year and will be closed. If needed, it can be updated and reopened.

jiangjiajun avatar Feb 06 '24 04:02 jiangjiajun