
Recognition with PaddleOCR: with the PP-OCRv4 model, enabling enable_mkldnn no longer provides any speedup

Open qq504692520 opened this issue 1 year ago • 6 comments

Please provide the following information to quickly locate the problem

  • System Environment: Windows 10
  • Version: Paddle: paddlepaddle 2.6, PaddleOCR: 2.7
  • Related components: ocr = PaddleOCR(use_angle_cls=False, lang="ch", use_mkldnn=True)  # need to run only once to download and load model into memory; img_path = './imgs/11.jpg'; result = ocr.ocr(img_path, cls=False). The flag has no acceleration effect here, but with PP-OCRv3 the same flag does speed things up.

qq504692520 avatar Feb 19 '24 09:02 qq504692520

Has your inference environment changed? For v4 we recommend OpenVINO acceleration, which gives a large speedup.

tink2123 avatar Feb 19 '24 11:02 tink2123

@tink2123 I followed your suggestion and used OpenVINO acceleration, via the deployment path in the FastDeploy project. It does help: inference now takes about 2 seconds. But people in the Paddle group say their OpenVINO deployment recognizes an image in about 120 ms; that is still a big gap. They use C++ with OpenVINO while I use Python. Is the gap caused by the language, or is something still wrong with my setup?
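Before comparing numbers across machines and languages, it is worth timing your own pipeline precisely, excluding the one-off model-load cost. A minimal plain-Python timing helper (a sketch; `benchmark` and the dummy workload are illustrative, you would pass your real call, e.g. `ppocr_v3.predict` with a loaded image):

```python
import time

def benchmark(fn, *args, warmup=3, runs=10):
    """Run a few warmup calls (the first calls pay model-load / cache cost),
    then return the best wall-clock time of `runs` calls, in milliseconds."""
    for _ in range(warmup):
        fn(*args)
    timings = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        timings.append((time.perf_counter() - t0) * 1000.0)
    return min(timings)

if __name__ == "__main__":
    # Dummy workload; swap in e.g. benchmark(ppocr_v3.predict, im).
    dummy = lambda: sum(range(10_000))
    print(f"best of 10 runs: {benchmark(dummy):.2f} ms")
```

Reporting the best of several runs (rather than a single timed call) removes most OS scheduling noise and makes the Python-vs-C++ comparison meaningful.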

qq504692520 avatar Feb 20 '24 07:02 qq504692520

Has your inference environment changed? For v4 we recommend OpenVINO acceleration, which gives a large speedup.

@tink2123 Is there a concrete link or library? Searching for "openvino PaddleOCR acceleration" did not turn up a concrete how-to; I hope someone can point the way.

mrchengshunlong avatar Feb 22 '24 07:02 mrchengshunlong

@mrchengshunlong The tutorial is in the PaddleOCR repository itself.

qq504692520 avatar Feb 22 '24 07:02 qq504692520

Has the inference environment changed? OpenVINO acceleration is recommended for v4 and brings a large speedup.

@tink2123 Can you provide sample code for using OpenVINO with Paddle to run prediction on an image?

ShubhamZoop avatar Feb 26 '24 19:02 ShubhamZoop

@ShubhamZoop I did it in Python. You can follow https://github.com/PaddlePaddle/PaddleOCR/tree/dygraph/deploy/fastdeploy/cpu-gpu/python to deploy and download the models, then run "python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device cpu --backend openvino". However, I prefer calling it from Python rather than the command line, so I rewrote infer.py; you can refer to it.

# coding:utf-8
import os

import cv2
import fastdeploy as fd
import numpy as np

def parse_arguments():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--det_model", default="ch_PP-OCRv3_det_infer", help="Path of the PPOCR detection model.")
    parser.add_argument(
        "--cls_model", default="ch_ppocr_mobile_v2.0_cls_infer", help="Path of the PPOCR classification model.")
    parser.add_argument(
        "--rec_model", default="ch_PP-OCRv3_rec_infer", help="Path of the PPOCR recognition model.")
    parser.add_argument(
        "--rec_label_file", default="ppocr_keys_v1.txt", help="Path of the label file for the recognition model.")
    parser.add_argument(
        "--image", default="infer_picture.jpg", type=str, help="Path of the test image file.")
    parser.add_argument(
        "--device", default="cpu", type=str, help="Type of inference device, 'cpu' or 'gpu'.")
    parser.add_argument(
        "--device_id", default=0, type=int, help="Which GPU card to run the model on.")
    parser.add_argument(
        "--cls_bs", default=1, type=int, help="Classification model inference batch size.")
    parser.add_argument(
        "--rec_bs", default=6, type=int, help="Recognition model inference batch size.")
    parser.add_argument(
        "--backend", default="openvino", type=str,
        help="Inference backend: ort/trt/pptrt/paddle/openvino/pplite; "
             "'openvino' is the default for CPU, 'trt' for GPU.")

    return parser.parse_args()

def build_option(args):

    det_option = fd.RuntimeOption()
    cls_option = fd.RuntimeOption()
    rec_option = fd.RuntimeOption()

    if args.device.lower() == "gpu":
        det_option.use_gpu(args.device_id)
        cls_option.use_gpu(args.device_id)
        rec_option.use_gpu(args.device_id)

    if args.backend.lower() == "trt":
        assert args.device.lower() == "gpu", \
            "TensorRT backend requires inference on GPU."
        det_option.use_trt_backend()
        cls_option.use_trt_backend()
        rec_option.use_trt_backend()

        # With the TRT backend, set the dynamic input shapes as follows.
        # We recommend keeping the detection model's input width and height
        # at a multiple of 32, and using the TRT input shapes below.
        det_option.trt_option.set_shape("x", [1, 3, 64, 64], [1, 3, 640, 640],
                                        [1, 3, 960, 960])
        cls_option.trt_option.set_shape("x", [1, 3, 48, 10],
                                        [args.cls_bs, 3, 48, 320],
                                        [args.cls_bs, 3, 48, 1024])
        rec_option.trt_option.set_shape("x", [1, 3, 48, 10],
                                        [args.rec_bs, 3, 48, 320],
                                        [args.rec_bs, 3, 48, 2304])

        # The TRT engine cache can be saved to disk as follows.
        det_option.set_trt_cache_file(args.det_model + "/det_trt_cache.trt")
        cls_option.set_trt_cache_file(args.cls_model + "/cls_trt_cache.trt")
        rec_option.set_trt_cache_file(args.rec_model + "/rec_trt_cache.trt")

    elif args.backend.lower() == "pptrt":
        assert args.device.lower() == "gpu", \
            "Paddle-TensorRT backend requires inference on GPU."
        det_option.use_paddle_infer_backend()
        det_option.paddle_infer_option.collect_trt_shape = True
        det_option.paddle_infer_option.enable_trt = True

        cls_option.use_paddle_infer_backend()
        cls_option.paddle_infer_option.collect_trt_shape = True
        cls_option.paddle_infer_option.enable_trt = True

        rec_option.use_paddle_infer_backend()
        rec_option.paddle_infer_option.collect_trt_shape = True
        rec_option.paddle_infer_option.enable_trt = True

        # With the TRT backend, set the dynamic input shapes as follows.
        # We recommend keeping the detection model's input width and height
        # at a multiple of 32, and using the TRT input shapes below.
        det_option.set_trt_input_shape("x", [1, 3, 64, 64], [1, 3, 640, 640],
                                       [1, 3, 960, 960])
        cls_option.set_trt_input_shape("x", [1, 3, 48, 10],
                                       [args.cls_bs, 3, 48, 320],
                                       [args.cls_bs, 3, 48, 1024])
        rec_option.set_trt_input_shape("x", [1, 3, 48, 10],
                                       [args.rec_bs, 3, 48, 320],
                                       [args.rec_bs, 3, 48, 2304])

        # The TRT engine cache can be saved to disk as follows.
        det_option.set_trt_cache_file(args.det_model)
        cls_option.set_trt_cache_file(args.cls_model)
        rec_option.set_trt_cache_file(args.rec_model)

    elif args.backend.lower() == "ort":
        det_option.use_ort_backend()
        cls_option.use_ort_backend()
        rec_option.use_ort_backend()

    elif args.backend.lower() == "paddle":
        det_option.use_paddle_infer_backend()
        cls_option.use_paddle_infer_backend()
        rec_option.use_paddle_infer_backend()

    elif args.backend.lower() == "openvino":
        assert args.device.lower() == "cpu", \
            "OpenVINO backend requires inference on CPU."
        det_option.use_openvino_backend()
        cls_option.use_openvino_backend()
        rec_option.use_openvino_backend()

    elif args.backend.lower() == "pplite":
        assert args.device.lower() == "cpu", \
            "Paddle Lite backend requires inference on CPU."
        det_option.use_lite_backend()
        cls_option.use_lite_backend()
        rec_option.use_lite_backend()

    return det_option, cls_option, rec_option

args = parse_arguments()
args.device = "cpu"
args.backend = "openvino"

det_model_file = os.path.join(args.det_model, "inference.pdmodel")
det_params_file = os.path.join(args.det_model, "inference.pdiparams")

cls_model_file = os.path.join(args.cls_model, "inference.pdmodel")
cls_params_file = os.path.join(args.cls_model, "inference.pdiparams")

rec_model_file = os.path.join(args.rec_model, "inference.pdmodel")
rec_params_file = os.path.join(args.rec_model, "inference.pdiparams")
rec_label_file = args.rec_label_file

det_option, cls_option, rec_option = build_option(args)

det_model = fd.vision.ocr.DBDetector(
    det_model_file, det_params_file, runtime_option=det_option)

cls_model = fd.vision.ocr.Classifier(
    cls_model_file, cls_params_file, runtime_option=cls_option)

rec_model = fd.vision.ocr.Recognizer(
    rec_model_file, rec_params_file, rec_label_file, runtime_option=rec_option)

# Pre- and post-processing parameters for the Det/Cls/Rec models.
# All values below are the defaults.
det_model.preprocessor.max_side_len = 960
det_model.postprocessor.det_db_thresh = 0.3
det_model.postprocessor.det_db_box_thresh = 0.6
det_model.postprocessor.det_db_unclip_ratio = 1.5
det_model.postprocessor.det_db_score_mode = "fast"
det_model.postprocessor.use_dilation = False
cls_model.postprocessor.cls_thresh = 0.9

# Build the PP-OCRv3 pipeline; if the cls model is not needed, pass cls_model=None.
ppocr_v3 = fd.vision.ocr.PPOCRv3(
    det_model=det_model, cls_model=cls_model, rec_model=rec_model)

# Set the inference batch size for the cls and rec models; valid values are -1
# or any positive integer. With -1, the batch size equals the number of boxes
# detected by the det model.
ppocr_v3.cls_batch_size = args.cls_bs
ppocr_v3.rec_batch_size = args.rec_bs

# Read the image via np.fromfile + cv2.imdecode so that paths containing
# non-ASCII (e.g. Chinese) characters also work on Windows; force 3-channel
# BGR, which is what the OCR models expect.
file_path = args.image
im = cv2.imdecode(np.fromfile(file_path, dtype=np.uint8), cv2.IMREAD_COLOR)
getObj = ppocr_v3.predict(im)

# Repack the flat prediction output into one dict per recognized text line.
result_list = []
for i in range(len(getObj.boxes)):
    # boxes[i] is a flat list [x0, y0, x1, y1, ...]; group it into (x, y) points.
    det_boxes = [getObj.boxes[i][j:j + 2] for j in range(0, len(getObj.boxes[i]), 2)]
    result_list.append({
        'box': det_boxes,
        'score': getObj.rec_scores[i],
        'text': getObj.text[i],
    })

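For downstream use, the per-line dicts can be flattened into plain reading-order text. A small pure-Python sketch (`to_plain_text` and `min_score` are my own illustrative names; it assumes the dict shape built above, with 'box' as a list of [x, y] corner points):

```python
def to_plain_text(result_list, min_score=0.5):
    """Drop low-confidence lines, sort the rest top-to-bottom (then
    left-to-right) by each box's first corner, and join the text."""
    kept = [r for r in result_list if r['score'] >= min_score]
    kept.sort(key=lambda r: (r['box'][0][1], r['box'][0][0]))
    return "\n".join(r['text'] for r in kept)

# Example with hand-made results in the same shape as result_list above:
demo = [
    {'box': [[10, 80], [200, 80], [200, 110], [10, 110]], 'score': 0.95, 'text': 'second line'},
    {'box': [[10, 20], [200, 20], [200, 50], [10, 50]], 'score': 0.98, 'text': 'first line'},
    {'box': [[10, 140], [200, 140], [200, 170], [10, 170]], 'score': 0.30, 'text': 'noise'},
]
print(to_plain_text(demo))  # prints "first line" then "second line"
```

Sorting by the first corner is only a rough reading order; for multi-column layouts you would need a proper layout-analysis step.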
mrchengshunlong avatar Feb 27 '24 07:02 mrchengshunlong