
Deploying the high-performance inference framework on Ascend 910B

Open · wangwenqi567 opened this issue 4 months ago · 6 comments

Problem description

Ascend 910B on an ARM architecture, installed following: https://github.com/PaddlePaddle/PaddleOCR/blob/main/docs/version3.x/other_devices_support/paddlepaddle_install_NPU.md

Environment

  1. PaddlePaddle / PaddleX / Python versions: PaddleX 3.1.0, Python 3.10

Question:

So far only a handful of models appear to support high-performance inference on Ascend. If I want other models to support it, do I have to convert them to ONNX format manually and modify the way the backend loads the model?
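For reference, Paddle inference models are typically exported to ONNX with the paddle2onnx tool. A command sketch follows; the directory and file names are illustrative, and whether the resulting ONNX model actually runs correctly on the NPU backend is exactly the open question here:

```shell
# Convert a Paddle inference model to ONNX (paths are illustrative).
paddle2onnx --model_dir PP-OCRv4_server_rec_infer \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file PP-OCRv4_server_rec.onnx \
            --opset_version 11
```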

wangwenqi567 · Sep 16 '25

After deploying the text recognition model on an Ascend 910B4, I ran inference with the ordinary (non-HPI) text recognition model. Original image:

(image attachment)

Result: (image attachment)

The code is as follows:

import io
import logging
from typing import Annotated, Any

import numpy as np
from fastapi import FastAPI, File, HTTPException, UploadFile
from paddlex import create_model
from PIL import Image

logger = logging.getLogger(__name__)
app = FastAPI()

model_dir = "PP-OCRv4_server_rec_infer"
model_name = "PP-OCRv4_server_rec"

USE_HPIP = False
# hpi_config is only consumed when use_hpip=True.
hpi_config = {
    "auto_config": False,
    "backend": "om",
}
try:
    recognizer = create_model(model_name=model_name, model_dir=model_dir, device="npu:0", use_hpip=USE_HPIP)
except Exception as e:
    logger.error(f"Failed to load the model: {e}")
    raise

@app.post("/pp-ocrv5_server_rec")
async def pp_ocrv5_server_rec(
    image_file: Annotated[UploadFile, File(...)],
) -> Any:
    try:
        image_data = await image_file.read()
        with Image.open(io.BytesIO(image_data)) as image:
            img = image.convert("RGB")
    except Exception as e:
        raise HTTPException(status_code=400, detail=f"Error opening image: {str(e)}")

    np_image = np.array(img)
    rec_results = recognizer.predict(np_image)
    ocr_result = {}
    for result in rec_results:
        ocr_result["rec_text"] = str(result["rec_text"]).replace(" ", "")
        ocr_result["rec_score"] = float(result["rec_score"])

    return ocr_result

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=1001)

When I run recognition with the high-performance text recognition model, the result is the same no matter which image I feed it:

(image attachment)

The only difference from the code above is:

hpi_config = {
    "auto_config": False, 
    "backend": "om",
}
recognizer = create_model(model_name=model_name, model_dir=model_dir, device="npu:0", use_hpip=True, hpi_config=hpi_config, input_shape=[3, 48, 320])
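One thing worth noting: an explicit input_shape=[3, 48, 320] means the compiled OM model expects every input already resized and padded to exactly that shape. A minimal numpy sketch of that kind of preprocessing follows, for illustration only; the actual resize and normalization PaddleX applies internally may differ:

```python
import numpy as np

def preprocess_rec_image(np_image, target_shape=(3, 48, 320)):
    """Resize an HWC uint8 image to a fixed CHW shape with right-padding,
    as a fixed-shape rec model (e.g. compiled for the om backend) expects.
    The pad value and [-1, 1] normalization here are assumptions."""
    c, target_h, target_w = target_shape
    h, w = np_image.shape[:2]
    # Scale to the target height, keeping aspect ratio, capped at target width.
    new_w = min(target_w, max(1, int(round(w * target_h / h))))
    # Nearest-neighbor resize via index mapping (avoids extra dependencies).
    row_idx = np.arange(target_h) * h // target_h
    col_idx = np.arange(new_w) * w // new_w
    resized = np_image[row_idx[:, None], col_idx[None, :]]
    # HWC -> CHW, normalize to [-1, 1], then zero-pad the width.
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 127.5 - 1.0
    padded = np.zeros(target_shape, dtype=np.float32)
    padded[:, :, :new_w] = chw
    return padded
```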
The model starts successfully:

python text_model_server.py
No model hoster is available! Please check your network connection to one of the following model hosts:
HuggingFace (https://huggingface.co),
ModelScope (https://modelscope.cn),
AIStudio (https://aistudio.baidu.com), or
BOS (https://paddle-model-ecology.bj.bcebos.com).
Otherwise, only local models can be used.
 - INFO - text ocr model_dir: Pdf2Text_models/PP-OCRv5_server_rec_infer, model_name: PP-OCRv5_server_rec
 init.cc:238] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.10/dist-packages/paddle_custom_device
 init.cc:146] Try loading custom device libs from: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
 custom_device_load.cc:51] Succeed in loading custom runtime in lib: /usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
 custom_device_load.cc:58] Skipped lib [/usr/local/lib/python3.10/dist-packages/paddle_custom_device/libpaddle-custom-npu.so]: no custom engine Plugin symbol in this lib.
custom_kernel.cc:68] Succeed in loading 359 custom kernel(s) from loaded lib(s), will be used like native ones.
I0916 20:41:33.723757  9559 init.cc:158] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.10/dist-packages/paddle_custom_device]
 CustomDevice: npu, visible devices count: 1
[    INFO] text_model_server.py:52 
INFO:     Started server process [9559]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:10082 (Press CTRL+C to quit)
INFO:     0.0.0.0:60852 - "POST /v1/pp-ocrv5_server_rec HTTP/1.1" 200 OK

Could you help me figure out what the cause is?

wangwenqi567 · Sep 16 '25

NPU high-performance inference does not yet support all models. For models not on the list, you can try converting them to ONNX manually, but there is no guarantee they will run, or that accuracy will be preserved.

The text recognition model does not take a full document image as input; it usually expects only a single text region cropped out of the image. If you want to process a full input image, we recommend using the OCR pipeline.
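To illustrate the point above: recognition expects per-text-line crops, which a detection stage normally produces. A hedged sketch assuming simple axis-aligned boxes (real det output is quadrilaterals, which would need perspective cropping; the box format here is an assumption):

```python
import numpy as np

def crop_text_regions(image, boxes):
    """Crop axis-aligned text boxes out of an HWC page image so each crop
    can be fed to the recognition model individually.
    boxes: iterable of (x1, y1, x2, y2) pixel coordinates (assumed format)."""
    crops = []
    for x1, y1, x2, y2 in boxes:
        crops.append(image[y1:y2, x1:x2])
    return crops
```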

Bobholamovic · Sep 17 '25

OK. Using the "PP-OCRv4_server_det_infer_om_910B" model with high-performance inference, I get this error:

ultra_infer/runtime/backends/om/om_backend.cc(265)::Execute execute model failed, modelId is 1, errorCode is 507011
ultra_infer/runtime/backends/om/om_backend.cc(131)::Infer execute inference failed

This model is supposed to support the om backend in high-performance mode; could you take a look at what is causing the error?

wangwenqi567 · Sep 17 '25

Could you share the code you are running?

a31413510 · Sep 18 '25

Sure.

wangwenqi567 · Sep 22 '25

model_dir = "PP-OCRv4_server_det_infer_om_910B"
model_name = "PP-OCRv4_server_det"

USE_HPIP = False
hpi_config = {
    "auto_config": False,
    "backend": "om",
}
try:
    model = create_model(model_name=model_name, model_dir=model_dir, device="npu:0", use_hpip=USE_HPIP)
    logger.info("TEXT OCR Model loaded successfully.")
except Exception as e:
    logger.error(f"Failed to load the model: {e}")
    raise

@app.post("/PP-OCRv4_server_det")
async def pp_ocr(
    image_file: Annotated[UploadFile, File(...)],
) -> Any:
    try:
        image_data = await image_file.read()
        with Image.open(io.BytesIO(image_data)) as image:
            img = image.convert("RGB")
    except Exception as e:
        raise HTTPException(status_code=400, detail=f"Error opening image: {str(e)}")

    np_image = np.array(img)
    rec_results = model.predict(np_image)

    ocr_result = {}
    for result in rec_results:
        ocr_result["rec_text"] = str(result["rec_text"]).replace(" ", "")
        ocr_result["rec_score"] = float(result["rec_score"])

    return ocr_result


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=10082)

Starting the service with the PP-OCRv4_server_det_infer_om_910B model succeeds, but the process hangs as soon as the service is called:

ASCEND_RT_VISIBLE_DEVICES=0 python text_det_model.py
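When an om-backend call fails (e.g. errorCode 507011) or hangs, the CANN runtime log usually records the underlying cause. These are standard CANN environment variables for surfacing it (the script name is taken from the report above; log-level values follow the CANN convention where 1 = INFO):

```shell
# Print CANN runtime logs to stdout at INFO level instead of ~/ascend/log.
export ASCEND_GLOBAL_LOG_LEVEL=1
export ASCEND_SLOG_PRINT_TO_STDOUT=1
ASCEND_RT_VISIBLE_DEVICES=0 python text_det_model.py
```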

wangwenqi567 · Sep 22 '25