Error in high-performance mode with the CPU version
🔎 Search before asking
- [x] I have searched the PaddleOCR Docs and found no similar bug report.
- [x] I have searched the PaddleOCR Issues and found no similar bug report.
- [x] I have searched the PaddleOCR Discussions and found no similar bug report.
🐛 Bug (Description)
As mentioned in https://github.com/PaddlePaddle/PaddleOCR/issues/15465, converting models to ONNX with the CPU version requires downgrading onnx to 1.16.0 and onnxruntime to 1.20.1. However, with table recognition enabled, converting and running the table-recognition ONNX models fails with: RuntimeError: Could not find an implementation for Where(16) node with name 'Where.12'. This error had appeared before I adjusted the onnx/onnxruntime versions and was resolved by the downgrade, but table recognition was disabled at that time. Now, with the adjusted versions and table recognition enabled, the same error appears again. Could you provide the exact onnx and onnxruntime versions that currently work, and fix this issue as soon as possible? Thanks. In addition: does CUDA 12 still not support the high-performance mode, and roughly when will it be supported? Can ONNX models be used with CUDA 12 in the current version?
The full error output:
Creating model: ('SLANeXt_wired', None)
Using official model (SLANeXt_wired), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Automatically converting PaddlePaddle model to ONNX format
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=8
2025-06-04 15:48:24.205540972 [W:onnxruntime:, graph.cc:109 MergeShapeInfo] Error merging shape info for output. 'p2o.sub_block.pd_op.assign.0.0' source:{1} target:{}. Falling back to lenient merge.
2025-06-04 15:48:24.209531483 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.225.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209581412 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.230.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209600736 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.525.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209687484 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.57.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209731477 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.156.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209794076 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.226.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209813372 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.0.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209835055 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.468.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209872749 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.213.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209955427 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.312.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.209983655 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.229.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.210010131 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.369.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.210892119 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.3.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.210919816 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.4.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.210939703 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.1.0'. It is not used by any node and should be removed from the model.
2025-06-04 15:48:24.210952016 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.2.0'. It is not used by any node and should be removed from the model.
Traceback (most recent call last):
File "/home/chenlm/srdkb-loader/loaders/pdf_loader/ppsV3_service.py", line 111, in
🏃♂️ Environment
Ubuntu 22.04.5 LTS, x86, Python 3.11, Intel Xeon
🌰 Minimal Reproducible Example
import time

from paddleocr import PPStructureV3

pipeline = PPStructureV3(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_seal_recognition=False,
    use_table_recognition=True,
    use_formula_recognition=False,
    use_chart_recognition=True,
    use_region_detection=False,
    device="cpu",
    text_detection_model_name="PP-OCRv5_mobile_det",
    text_recognition_model_name="PP-OCRv4_mobile_rec",
    enable_hpi=True,
)
output = pipeline.predict("/home/chenlm/testFile/test.png")
for res in output:
    res.print()
Please run pip show paddle2onnx and check which version of Paddle2ONNX you have.
CUDA 12 support is expected next month.
2.0.2rc1
Try pip install paddle2onnx==2.0.2rc3; it works fine in my local tests.
Before re-running the program, you may need to clear the cache with rm -rf ~/.paddlex/official_models/*/.cache.
After upgrading to paddle2onnx==2.0.2rc3 the program no longer raises that error, but it hits a segmentation fault when processing tables, and I found that the table-recognition models in the model directory were never converted to ONNX. Yesterday I learned that paddleocr had been updated to 3.0.1, and in https://github.com/PaddlePaddle/PaddleOCR/issues/15465 you said the ONNX issue would be fixed in 3.0.1, so I upgraded paddleocr, uninstalled the onnx-related dependencies, and reinstalled the high-performance plugin dependencies via paddleocr install_hpi_deps cpu. I deleted all previously generated ONNX models and re-ran the code, which now raises a new error: RuntimeError: No inference backend and configuration could be suggested. Reason: 'PP-LCNet_x1_0_textline_ori' is not a known model. In the model directory I noticed that, compared with 3.0.0, a new PP-LCNet_x1_0_textline_ori model was downloaded. What should I do next? Thanks!
The relevant dependency versions in my environment:
paddle2onnx 2.0.2rc3
paddleocr 3.0.1
paddlepaddle 3.0.0
paddlex 3.0.1
onnx 1.17.0
onnx_graphsurgeon 0.5.8
onnxoptimizer 0.3.13
onnxruntime 1.22.0
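(For reference, a report like this can be generated with a short snippet. This is a minimal sketch; it assumes the pip distribution names match the package names listed above, and name normalization of '-' vs '_' may vary across Python versions.)

from importlib.metadata import PackageNotFoundError, version

for pkg in ("paddle2onnx", "paddleocr", "paddlepaddle", "paddlex",
            "onnx", "onnx_graphsurgeon", "onnxoptimizer", "onnxruntime"):
    try:
        # Print the installed version of each relevant distribution.
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")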
The full error output:
Creating model: ('PP-DocLayout_plus-L', None)
Using official model (PP-DocLayout_plus-L), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddle/utils/cpp_extension/extension_utils.py:711: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)
Automatically converting PaddlePaddle model to ONNX format
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('PP-LCNet_x1_0_textline_ori', None)
Using official model (PP-LCNet_x1_0_textline_ori), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Traceback (most recent call last):
  File "/home/chenlm/srdkb-loader/loaders/pdf_loader/ppsV3_service.py", line 115, in <module>
    ocr_service = OCRV3Service()
  File "/home/chenlm/srdkb-loader/loaders/pdf_loader/ppsV3_service.py", line 20, in __init__
    self.ocr_engine = PPStructureV3(use_doc_orientation_classify=False,
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddleocr/_pipelines/pp_structurev3.py", line 96, in __init__
    super().__init__(**kwargs)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddleocr/_pipelines/base.py", line 63, in __init__
    self.paddlex_pipeline = self._create_paddlex_pipeline()
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddleocr/_pipelines/base.py", line 97, in _create_paddlex_pipeline
    return create_pipeline(config=self._merged_paddlex_config, **kwargs)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/__init__.py", line 165, in create_pipeline
    pipeline = BasePipeline.get(pipeline_name)(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/utils/deps.py", line 195, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/_parallel.py", line 103, in __init__
    self._pipeline = self._create_internal_pipeline(config, self.device)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
    return self._pipeline_cls(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 83, in __init__
    self.inintial_predictor(config)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 160, in inintial_predictor
    self.general_ocr_pipeline = self.create_pipeline(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/base.py", line 140, in create_pipeline
    pipeline = create_pipeline(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/__init__.py", line 165, in create_pipeline
    pipeline = BasePipeline.get(pipeline_name)(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/utils/deps.py", line 195, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/_parallel.py", line 103, in __init__
    self._pipeline = self._create_internal_pipeline(config, self.device)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
    return self._pipeline_cls(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/ocr/pipeline.py", line 83, in __init__
    self.textline_orientation_model = self.create_model(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/pipelines/base.py", line 107, in create_model
    model = create_predictor(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/__init__.py", line 77, in create_predictor
    return BasePredictor.get(model_name)(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/image_classification/predictor.py", line 49, in __init__
    self.preprocessors, self.infer, self.postprocessors = self._build()
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/image_classification/predictor.py", line 82, in _build
    infer = self.create_static_infer()
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 242, in create_static_infer
    return HPInfer(
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/utils/deps.py", line 148, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/common/static_infer.py", line 575, in __init__
    backend, backend_config = self._determine_backend_and_config()
  File "/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/common/static_infer.py", line 630, in _determine_backend_and_config
    raise RuntimeError(
RuntimeError: No inference backend and configuration could be suggested. Reason: 'PP-LCNet_x1_0_textline_ori' is not a known model.
The code currently in use:

PPStructureV3(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_seal_recognition=False,
    use_table_recognition=True,
    use_formula_recognition=False,
    use_chart_recognition=False,
    use_region_detection=False,
    device="cpu",
    text_detection_model_name="PP-OCRv5_mobile_det",
    text_recognition_model_name="PP-OCRv4_mobile_rec",
    enable_hpi=True,
)
paddleocr 3.0.1 updated the required paddle2onnx version to 2.0.2rc3, which pins the onnx dependency and thus fixes the earlier IR Version problem. However, paddleocr 3.0.1 seems to have introduced a new bug, which is the RuntimeError you are seeing. Sorry for the inconvenience; we will fix it soon, and the fix will land in the 3.0.2 release expected next week. For now, we suggest setting textline_orientation_model_name="PP-LCNet_x0_25_textline_ori" to work around the problem.
About the segmentation fault during inference: could you post a screenshot of the error? Also, does it happen every time, or only occasionally?
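For instance, applied to the earlier minimal example, the workaround would look like this (a sketch only; all other arguments kept as in the original script):

from paddleocr import PPStructureV3

pipeline = PPStructureV3(
    use_table_recognition=True,
    device="cpu",
    text_detection_model_name="PP-OCRv5_mobile_det",
    text_recognition_model_name="PP-OCRv4_mobile_rec",
    # Explicitly select the x0_25 textline-orientation model so the
    # unknown PP-LCNet_x1_0_textline_ori default is never requested.
    textline_orientation_model_name="PP-LCNet_x0_25_textline_ori",
    enable_hpi=True,
)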
With the versions at that time I tried three times and hit the segmentation fault each time. I have since upgraded everything.
Dependency versions:
onnx 1.16.0
onnx_graphsurgeon 0.5.8
onnxoptimizer 0.3.13
onnxruntime 1.20.1
paddle2onnx 2.0.2rc3
paddleocr 3.0.0
paddlepaddle 3.0.0
paddlex 3.0.0
Following your suggestion, I set textline_orientation_model_name="PP-LCNet_x0_25_textline_ori", but under the latest version (3.0.1) it now fails with:
RuntimeError: Could not find an implementation for Where(16) node with name 'Where.12'
Dependency versions:
onnx 1.17.0
onnx_graphsurgeon 0.5.8
onnxoptimizer 0.3.13
onnxruntime 1.22.0
paddle2onnx 2.0.2rc3
paddleocr 3.0.1
paddlepaddle 3.0.0
paddlex 3.0.1
The full error:
2025-06-06 11:07:49.173987221 [W:onnxruntime:, graph.cc:109 MergeShapeInfo] Error merging shape info for output. 'p2o.sub_block.pd_op.assign.0.0' source:{1} target:{}. Falling back to lenient merge.
2025-06-06 11:07:49.177704825 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.225.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.177746870 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.230.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.177775373 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.525.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.177844192 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.57.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.177890821 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.156.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.177954331 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.226.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.177974486 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.0.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.178003689 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.468.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.178043762 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.213.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.178125369 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.312.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.178156253 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full_int_array.229.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.178180727 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.pd_op.full.369.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.179065817 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.3.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.179092613 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.4.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.179099173 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.1.0'. It is not used by any node and should be removed from the model.
2025-06-06 11:07:49.179118197 [W:onnxruntime:, graph.cc:4285 CleanUnusedInitializersAndNodeArgs] Removing initializer 'p2o.sub_block.pd_op.full.2.0'. It is not used by any node and should be removed from the model.
Traceback (most recent call last):
File "/home/chenlm/srdkb-loader/loaders/pdf_loader/ppsV3_service.py", line 117, in
Could you share an image that triggers the segmentation fault? I can't seem to reproduce it locally. Also, I'd like to confirm: does the segfault occur with the same code as before, e.g. with text_detection_model_name="PP-OCRv5_mobile_det"?
Also, regarding the ONNX model error: my earlier note that "before re-running the program you may need to clear the cache with rm -rf ~/.paddlex/official_models/*/.cache" wasn't precise enough. That command only removes the cache and does not delete previously generated ONNX files. You may also need to run rm -rf ~/.paddlex/official_models/*/*.onnx.
After that, try regenerating the models. I tested a table image in an identical environment and got normal results.
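The same cleanup can also be expressed in Python (a minimal sketch, assuming the default ~/.paddlex/official_models layout used in this thread):

import shutil
from pathlib import Path

official_models = Path.home() / ".paddlex" / "official_models"
if official_models.exists():
    for model_dir in official_models.iterdir():
        if not model_dir.is_dir():
            continue
        # Equivalent of: rm -rf ~/.paddlex/official_models/*/.cache
        shutil.rmtree(model_dir / ".cache", ignore_errors=True)
        # Equivalent of: rm -rf ~/.paddlex/official_models/*/*.onnx
        for onnx_file in model_dir.glob("*.onnx"):
            onnx_file.unlink()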
My Python version is 3.11.
The PPStructureV3 configuration code is unchanged:
self.ocr_engine = PPStructureV3(use_doc_orientation_classify=False,
use_doc_unwarping=False,
use_seal_recognition=False,
use_table_recognition=True,
use_formula_recognition=False,
use_chart_recognition=False,
use_region_detection=False,
device="cpu",
text_detection_model_name="PP-OCRv5_mobile_det",
text_recognition_model_name="PP-OCRv4_mobile_rec",
textline_orientation_model_name="PP-LCNet_x0_25_textline_ori",
enable_hpi=True
)
Because PDF input failed under 3.0.0, I converted the pages to PNG for input; under 3.0.1 I use the PDF file directly as input.
When the segfault occurred under 3.0.0, the OCR service started successfully and handled non-table images; the error occurred only when it hit a table image. The full output:
Using official model (PP-LCNet_x0_25_textline_ori), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-OCRv5_mobile_det', None)
Using official model (PP-OCRv5_mobile_det), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-OCRv4_mobile_rec', None)
Using official model (PP-OCRv4_mobile_rec), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-LCNet_x1_0_table_cls', None)
Using official model (PP-LCNet_x1_0_table_cls), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('SLANeXt_wired', None)
Using official model (SLANeXt_wired), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'x': [[1, 3, 32, 32], [1, 3, 64, 448], [8, 3, 488, 488]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_dynamic_shape_input_data: None, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('SLANet_plus', None)
Using official model (SLANet_plus), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'x': [[1, 3, 32, 32], [1, 3, 64, 448], [8, 3, 488, 488]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_dynamic_shape_input_data: None, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('RT-DETR-L_wired_table_cell_det', None)
Using official model (RT-DETR-L_wired_table_cell_det), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'im_shape': [[1, 2], [1, 2], [8, 2]], 'image': [[1, 3, 640, 640], [1, 3, 640, 640], [8, 3, 640, 640]], 'scale_factor': [[1, 2], [1, 2], [8, 2]]}, trt_dynamic_shape_input_data: {'im_shape': [[640.0, 640.0], [640.0, 640.0], [640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0]], 'scale_factor': [[2.0, 2.0], [1.0, 1.0], [0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('RT-DETR-L_wireless_table_cell_det', None)
Using official model (RT-DETR-L_wireless_table_cell_det), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'im_shape': [[1, 2], [1, 2], [8, 2]], 'image': [[1, 3, 640, 640], [1, 3, 640, 640], [8, 3, 640, 640]], 'scale_factor': [[1, 2], [1, 2], [8, 2]]}, trt_dynamic_shape_input_data: {'im_shape': [[640.0, 640.0], [640.0, 640.0], [640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0]], 'scale_factor': [[2.0, 2.0], [1.0, 1.0], [0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('PP-Chart2Table', None)
Using official model (PP-Chart2Table), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/doc_vlm/predictor.py:100: UserWarning: The PP-Chart2Table series does not support use_hpip=True for now.
warnings.warn(
Loading configuration file /home/chenlm/.paddlex/official_models/PP-Chart2Table/config.json
Loading weights file /home/chenlm/.paddlex/official_models/PP-Chart2Table/model_state.pdparams
Loaded weights file from disk, setting weights to model.
All model checkpoint weights were used when initializing PPChart2TableInference.
All the weights of PPChart2TableInference were initialized from the model checkpoint at /home/chenlm/.paddlex/official_models/PP-Chart2Table.
If your task is similar to the task the model of the checkpoint was trained on, you can already use PPChart2TableInference for predictions without further training.
Loading configuration file /home/chenlm/.paddlex/official_models/PP-Chart2Table/generation_config.json
Creating model: ('PP-DocBlockLayout', None)
Using official model (PP-DocBlockLayout), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('PP-DocLayout_plus-L', None)
Using official model (PP-DocLayout_plus-L), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: onnxruntime
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/runtime.cc(308)::CreateOrtBackend Runtime initialized with Backend::ORT in Device::CPU.
Creating model: ('PP-LCNet_x0_25_textline_ori', None)
Using official model (PP-LCNet_x0_25_textline_ori), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-OCRv5_mobile_det', None)
Using official model (PP-OCRv5_mobile_det), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-OCRv4_mobile_rec', None)
Using official model (PP-OCRv4_mobile_rec), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-LCNet_x1_0_table_cls', None)
Using official model (PP-LCNet_x1_0_table_cls), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('SLANeXt_wired', None)
Using official model (SLANeXt_wired), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'x': [[1, 3, 32, 32], [1, 3, 64, 448], [8, 3, 488, 488]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_dynamic_shape_input_data: None, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('SLANet_plus', None)
Using official model (SLANet_plus), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'x': [[1, 3, 32, 32], [1, 3, 64, 448], [8, 3, 488, 488]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_dynamic_shape_input_data: None, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('RT-DETR-L_wired_table_cell_det', None)
Using official model (RT-DETR-L_wired_table_cell_det), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'im_shape': [[1, 2], [1, 2], [8, 2]], 'image': [[1, 3, 640, 640], [1, 3, 640, 640], [8, 3, 640, 640]], 'scale_factor': [[1, 2], [1, 2], [8, 2]]}, trt_dynamic_shape_input_data: {'im_shape': [[640.0, 640.0], [640.0, 640.0], [640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0]], 'scale_factor': [[2.0, 2.0], [1.0, 1.0], [0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('RT-DETR-L_wireless_table_cell_det', None)
Using official model (RT-DETR-L_wireless_table_cell_det), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'im_shape': [[1, 2], [1, 2], [8, 2]], 'image': [[1, 3, 640, 640], [1, 3, 640, 640], [8, 3, 640, 640]], 'scale_factor': [[1, 2], [1, 2], [8, 2]]}, trt_dynamic_shape_input_data: {'im_shape': [[640.0, 640.0], [640.0, 640.0], [640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0, 640.0]], 'scale_factor': [[2.0, 2.0], [1.0, 1.0], [0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67, 0.67]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('PP-Chart2Table', None)
Using official model (PP-Chart2Table), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/home/chenlm/.conda/envs/paddleocr30/lib/python3.11/site-packages/paddlex/inference/models/doc_vlm/predictor.py:100: UserWarning: The PP-Chart2Table series does not support use_hpip=True for now.
warnings.warn(
Loading configuration file /home/chenlm/.paddlex/official_models/PP-Chart2Table/config.json
Loading weights file /home/chenlm/.paddlex/official_models/PP-Chart2Table/model_state.pdparams
Loaded weights file from disk, setting weights to model.
All model checkpoint weights were used when initializing PPChart2TableInference.
All the weights of PPChart2TableInference were initialized from the model checkpoint at /home/chenlm/.paddlex/official_models/PP-Chart2Table.
If your task is similar to the task the model of the checkpoint was trained on, you can already use PPChart2TableInference for predictions without further training.
Loading configuration file /home/chenlm/.paddlex/official_models/PP-Chart2Table/generation_config.json
Creating model: ('PP-LCNet_x1_0_doc_ori', None)
Using official model (PP-LCNet_x1_0_doc_ori), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'x': [[1, 3, 224, 224], [1, 3, 224, 224], [8, 3, 224, 224]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_dynamic_shape_input_data: None, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('PP-LCNet_x0_25_textline_ori', None)
Using official model (PP-LCNet_x0_25_textline_ori), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
Inference backend: openvino
Inference backend config: cpu_num_threads=8
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(371)::InitFromOnnx number of streams:1.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(375)::InitFromOnnx affinity:YES.
[INFO] ultra_infer/runtime/backends/openvino/ov_backend.cc(387)::InitFromOnnx Compile OpenVINO model on device_name:CPU.
[INFO] ultra_infer/runtime/runtime.cc(283)::CreateOpenVINOBackend Runtime initialized with Backend::OPENVINO in Device::CPU.
Creating model: ('PP-OCRv5_server_det', None)
Using official model (PP-OCRv5_server_det), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'x': [[1, 3, 32, 32], [1, 3, 736, 736], [1, 3, 4000, 4000]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_dynamic_shape_input_data: None, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
Creating model: ('PP-OCRv5_server_rec', None)
Using official model (PP-OCRv5_server_rec), the model files will be automatically downloaded and saved in /home/chenlm/.paddlex/official_models.
The Paddle Inference backend is selected with the default configuration. This may not provide optimal performance.
Using Paddle Inference backend
Paddle predictor option: device_type: cpu, device_id: None, trt_dynamic_shapes: {'x': [[1, 3, 48, 160], [1, 3, 48, 320], [8, 3, 48, 3200]]}, run_mode: paddle, cpu_threads: 8, delete_pass: [], enable_new_ir: True, enable_cinn: False, trt_cfg_setting: {}, trt_use_dynamic_shapes: True, trt_collect_shape_range_info: True, trt_discard_cached_shape_range_info: False, trt_dynamic_shape_input_data: None, trt_shape_range_info_path: None, trt_allow_rebuild_at_runtime: True
2025-06-06 09:01:38.068 | INFO | ppsV3_service:process_file:55 - Successfully read file from local disk: /home/chenlm/testFile/表格识别测试-存储占用计算.pdf
2025-06-06 09:01:38.431 | INFO | ppsV3_service:process_file:69 - PDF converted to 2 images.
2025-06-06 09:01:38.431 | INFO | ppsV3_service:process_file:75 - Processing image: img/表格识别测试-存储占用计算_page_1.png
C++ Traceback (most recent call last):
0 paddle::AnalysisPredictor::ZeroCopyRun(bool)
1 paddle::framework::NaiveExecutor::RunInterpreterCore(std::vector<std::string, std::allocator<std::string > > const&, bool, bool)
2 paddle::framework::InterpreterCore::Run(std::vector<std::string, std::allocator<std::string > > const&, bool, bool, bool, bool)
3 paddle::framework::PirInterpreter::Run(std::vector<std::string, std::allocator<std::string > > const&, bool, bool, bool, bool)
4 paddle::framework::PirInterpreter::TraceRunImpl()
5 paddle::framework::PirInterpreter::TraceRunInstructionList(std::vector<std::unique_ptr<paddle::framework::InstructionBase, std::default_delete<paddle::framework::InstructionBase> >, std::allocator<std::unique_ptr<paddle::framework::InstructionBase, std::default_delete<paddle::framework::InstructionBase> > > > const&)
6 paddle::framework::PirInterpreter::RunInstructionBase(paddle::framework::InstructionBase*)
7 paddle::framework::PhiKernelInstruction::Run()
8 phi::KernelImpl<void ()(phi::CPUContext const&, phi::DenseTensor const&, phi::DenseTensor const&, std::vector<int, std::allocator
Error Message Summary:
FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1749171725 (unix time) try "date -d @1749171725" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x7f0baaebf480) received by PID 2293412 (TID 0x7f17caffb640) from PID 18446744072282174592 ***]
Additionally, with the latest-version setup, after clearing the cache and the ONNX models as you instructed, the error RuntimeError: Could not find an implementation for Where(16) node with name 'Where.12' still occurs when starting the OCR service (before any PDF is processed). The full output is as follows:
My code:
import time

from paddleocr import PPStructureV3

pipeline = PPStructureV3(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_seal_recognition=False,
    use_table_recognition=True,
    use_formula_recognition=False,
    use_chart_recognition=False,
    use_region_detection=False,
    device="cpu",
    text_detection_model_name="PP-OCRv5_mobile_det",
    text_recognition_model_name="PP-OCRv4_mobile_rec",
    textline_orientation_model_name="PP-LCNet_x0_25_textline_ori",
    enable_hpi=True,
)

time_start = time.time()
output = pipeline.predict("/home/chenlm/testFile/test.png")
print(f"time cost: {time.time() - time_start}s")

for res in output:
    res.print()  # print the structured prediction output
After adjusting parameters, I found that with use_table_recognition set to False the service starts normally and produces normal output. With it set to True, the error seems to occur exactly at the point where the SLANeXt_wired model is converted to ONNX, as shown in the screenshot:
So I deleted the SLANeXt_wired model directory to force a fresh download; after that, converting this model no longer errors. However, the subsequent output still reports: RuntimeError: No inference backend and configuration could be suggested. Reason: 'PP-LCNet_x1_0_textline_ori' is not a known model,
even though I have already set textline_orientation_model_name="PP-LCNet_x0_25_textline_ori". I deleted all the model directories marked in the red box and cleared the cache, but the error persists. What I can confirm is that both PP-LCNet_x0_25_textline_ori and PP-LCNet_x1_0_textline_ori were freshly downloaded during this run. Is any further configuration needed to bypass PP-LCNet_x1_0_textline_ori?
I hope this gives you useful information. Thanks for your guidance!
It looks like there may be problems left over from historical versions. I suggest we start from scratch, as follows:

- Install PaddleX 3.0.1 and PaddleOCR 3.0.1. Run paddlex --install paddle2onnx to install the correct version of Paddle2ONNX. The environment you mentioned earlier is fine: onnx 1.17.0, onnx_graphsurgeon 0.5.8, onnxoptimizer 0.3.13, onnxruntime 1.22.0, paddle2onnx 2.0.2rc3, paddleocr 3.0.1, paddlepaddle 3.0.0, paddlex 3.0.1.
- Delete every model directory involved under ~/.paddlex/official_models, so that PaddleX automatically re-downloads fresh models.
- Run the following script:

import time

from paddleocr import PPStructureV3

pipeline = PPStructureV3(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_seal_recognition=False,
    use_table_recognition=True,
    use_formula_recognition=False,
    use_chart_recognition=True,
    use_region_detection=False,
    device="cpu",
    text_detection_model_name="PP-OCRv5_mobile_det",
    text_recognition_model_name="PP-OCRv4_mobile_rec",
    textline_orientation_model_name="PP-LCNet_x0_25_textline_ori",
    enable_hpi=True,
)
output = pipeline.predict("/home/chenlm/testFile/test.png")
for res in output:
    res.print()

If you still hit a segmentation fault with this setup, there may be a bug we haven't discovered yet; feel free to post a screenshot.
The segfault is not the main problem right now: I haven't hit it since upgrading, or rather haven't had the chance to (execution never reaches the file-processing step). I'm currently stuck at RuntimeError: No inference backend and configuration could be suggested. Reason: 'PP-LCNet_x1_0_textline_ori' is not a known model. Is there still a way to bypass this model, or should I simply wait for the next release?
We discussed the way to bypass this model earlier: sorry for the inconvenience; we will fix it soon, and the fix will land in the 3.0.2 release expected next week. For now, we suggest setting textline_orientation_model_name="PP-LCNet_x0_25_textline_ori" to work around the problem.
You can follow the steps I posted above and try again.
I ran into the same problem when using the high-performance mode with the CPU version. Following the error message, I found the file hpi_model_info_collection.json under /(python_path)/dist-packages/paddlex/inference/utils and added

"PP-LCNet_x1_0_textline_ori": [
    "openvino",
    "onnxruntime",
    "paddle"
],

and

"PP-LCNet_x1_0_textline_ori": [
    "tensorrt",
    "paddle_tensorrt",
    "onnxruntime"
],
After that, the PP-LCNet_x1_0_textline_ori model was converted to ONNX normally,
and running inference again in the CPU high-performance mode worked fine.
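For anyone who wants to script this edit, here is a minimal sketch. The file location is taken from the path above, while the top-level section names ("cpu", "gpu") in the JSON are assumptions about its layout; check the actual file before writing.

import json
from pathlib import Path

import paddlex.inference.utils

# Locate hpi_model_info_collection.json inside the installed paddlex package
# (path assumption: it sits in paddlex/inference/utils as described above).
utils_dir = Path(paddlex.inference.utils.__file__).parent
json_path = utils_dir / "hpi_model_info_collection.json"

info = json.loads(json_path.read_text(encoding="utf-8"))

# Register PP-LCNet_x1_0_textline_ori with the backend lists quoted above.
# The "cpu"/"gpu" section names are assumptions about the file's structure.
info.setdefault("cpu", {})["PP-LCNet_x1_0_textline_ori"] = [
    "openvino",
    "onnxruntime",
    "paddle",
]
info.setdefault("gpu", {})["PP-LCNet_x1_0_textline_ori"] = [
    "tensorrt",
    "paddle_tensorrt",
    "onnxruntime",
]

json_path.write_text(json.dumps(info, indent=2, ensure_ascii=False), encoding="utf-8")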
Since the problem should be fixed in the new version, I'll close this issue for now. If anyone has further problems, feel free to reopen it or file a new issue.