
paddle2onnx error

congyao123456 opened this issue 5 months ago

Using the officially documented command (on Windows):

paddlex --paddle2onnx --paddle_model_dir ./ResNet18 --onnx_model_dir ./ResNet18

it fails with:

ImportError: DLL load failed while importing paddle2onnx_cpp2py_export: The specified procedure could not be found.
Paddle2ONNX conversion failed with exit code 1

Full output:

(paddle_env) D:\project\PaddleX-release-3.0>paddlex --paddle2onnx --paddle_model_dir output/best_model/inference --onnx_model_dir ./onnx
Input dir: output\best_model\inference
Output dir: onnx
Paddle2ONNX conversion starting...
INFO: Could not find files for the given pattern.
C:\Users\sunjian\Anaconda3\envs\paddle_env\lib\site-packages\paddle\utils\cpp_extension\extension_utils.py:711: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
  warnings.warn(warning_message)
Traceback (most recent call last):
  File "C:\Users\sunjian\Anaconda3\envs\paddle_env\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\sunjian\Anaconda3\envs\paddle_env\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\sunjian\Anaconda3\envs\paddle_env\Scripts\paddle2onnx.exe\__main__.py", line 4, in <module>
  File "C:\Users\sunjian\Anaconda3\envs\paddle_env\lib\site-packages\paddle2onnx\__init__.py", line 47, in <module>
    from .convert import export  # noqa: F401
  File "C:\Users\sunjian\Anaconda3\envs\paddle_env\lib\site-packages\paddle2onnx\convert.py", line 18, in <module>
    import paddle2onnx.paddle2onnx_cpp2py_export as c_p2o
ImportError: DLL load failed while importing paddle2onnx_cpp2py_export: The specified procedure could not be found.
Paddle2ONNX conversion failed with exit code 1

congyao123456 avatar May 29 '25 11:05 congyao123456
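"The specified procedure could not be found" in a DLL-load ImportError generally means the compiled extension (here paddle2onnx_cpp2py_export) resolved a dependent DLL but could not find an expected symbol in it, i.e. the installed wheel does not match the environment. A hedged first step, not a confirmed fix, is to rule out a corrupted or stale wheel:

```bash
# Re-download and reinstall the wheel, bypassing the local pip cache.
# If the ImportError persists, a version mismatch is more likely; see the
# pinning suggestion later in this thread.
pip install --force-reinstall --no-cache-dir paddle2onnx
```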

Have you installed paddle2onnx? Which version of paddle2onnx are you using?

zhang-prog avatar Jun 03 '25 12:06 zhang-prog
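For anyone asked the same questions, the standard way to answer both (plain pip/Python, nothing PaddleX-specific):

```bash
pip show paddle2onnx    # prints "Version: ..." only if the package is installed
# Assumes the package exposes __version__, which current paddle2onnx releases do:
python -c "import paddle2onnx; print(paddle2onnx.__version__)"
```

Note that on the Windows setup above the second command will fail with the same DLL-load ImportError; `pip show` still works because it only reads package metadata.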

This issue has had no response for a long time and will be closed. You can reopen it or open a new issue if you are still confused.


From Bot

TingquanGao avatar Jul 05 '25 03:07 TingquanGao

Automatically converting PaddlePaddle model to ONNX format   (×5)
Fetching 6 files: 100%|██████████| 6/6 [00:06<00:00, 1.14s/it]
Encounter exception when download model from huggingface: Destination path '/root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori/temp_dir' already exists. PaddleX would try to download from BOS.
Automatically converting PaddlePaddle model to ONNX format   (×2)
Process Process-9:
Traceback (most recent call last):
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/models/common/static_infer.py", line 720, in _build_ui_runtime
    subprocess.run(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['paddlex', '--paddle2onnx', '--paddle_model_dir', '/root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori', '--onnx_model_dir', '/root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/lanyun-tmp/paddlex/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/root/lanyun-tmp/paddlex/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/lanyun-tmp/mp_infer.py", line 23, in worker
    pipeline = create_pipeline(pipeline_name_or_config_path, device=device, use_hpip=True)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/__init__.py", line 166, in create_pipeline
    pipeline = BasePipeline.get(pipeline_name)(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/utils/deps.py", line 195, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 103, in __init__
    self._pipeline = self._create_internal_pipeline(config, self.device)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
    return self._pipeline_cls(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 82, in __init__
    self.inintial_predictor(config)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/layout_parsing/pipeline_v2.py", line 120, in inintial_predictor
    self.doc_preprocessor_pipeline = self.create_pipeline(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/base.py", line 140, in create_pipeline
    pipeline = create_pipeline(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/__init__.py", line 166, in create_pipeline
    pipeline = BasePipeline.get(pipeline_name)(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/utils/deps.py", line 195, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 103, in __init__
    self._pipeline = self._create_internal_pipeline(config, self.device)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
    return self._pipeline_cls(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/doc_preprocessor/pipeline.py", line 67, in __init__
    self.doc_ori_classify_model = self.create_model(doc_ori_classify_config)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/pipelines/base.py", line 107, in create_model
    model = create_predictor(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/models/__init__.py", line 77, in create_predictor
    return BasePredictor.get(model_name)(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/models/image_classification/predictor.py", line 49, in __init__
    self.preprocessors, self.infer, self.postprocessors = self._build()
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/models/image_classification/predictor.py", line 82, in _build
    infer = self.create_static_infer()
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/models/base/predictor/base_predictor.py", line 242, in create_static_infer
    return HPInfer(
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/utils/deps.py", line 148, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/models/common/static_infer.py", line 576, in __init__
    ui_runtime = self._build_ui_runtime(backend, backend_config)
  File "/root/lanyun-tmp/paddlex/lib/python3.10/site-packages/paddlex/inference/models/common/static_infer.py", line 734, in _build_ui_runtime
    raise RuntimeError(
RuntimeError: PaddlePaddle-to-ONNX conversion failed:
Input dir: /root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori
Output dir: /root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori
Paddle2ONNX conversion starting...
Paddle2ONNX conversion failed with exit code -7

Process Process-8: identical traceback and chained RuntimeError as Process-9 above (Paddle2ONNX conversion failed with exit code -7).

Inference backend: tensorrt
Inference backend config: precision='fp16' use_dynamic_shapes=True dynamic_shapes={'x': [[1, 3, 224, 224], [1, 3, 224, 224], [8, 3, 224, 224]]}
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(567)::BuildTrtEngine [TrtBackend] Use FP16 to inference.
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(572)::BuildTrtEngine Start to building TensorRT Engine...
(the four lines above repeat once per parallel worker process, six times in total)

I have the same problem; the errors are as above. My environment:
(1) 4× RTX 3090
(2) Ubuntu, CUDA 11.8, Python 3.10
(3) paddlepaddle (the CUDA 11.8 build), paddlex, hpi-gpu, and paddle2onnx installed following the official guide

hyp530 avatar Jul 19 '25 09:07 hyp530
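A note on the exit code: in Python's subprocess module a negative return code means the child was killed by a signal, so "exit code -7" says the converter process died from signal 7 (SIGBUS) rather than reporting an ordinary error; on Linux that often points at a memory or shared-memory problem. Since the logs show several worker processes each spawning `paddlex --paddle2onnx` on the same model directory at once, one hedged workaround (unverified here) is to convert each official model once, serially, before starting the workers; the command below is the exact one the failing subprocess ran, taken from the traceback:

```bash
# Pre-convert the document-orientation model so the parallel workers find the
# ONNX model already in place instead of all converting it concurrently.
paddlex --paddle2onnx \
    --paddle_model_dir /root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori \
    --onnx_model_dir /root/.paddlex/official_models/PP-LCNet_x1_0_doc_ori
```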

Creating model: ('PP-DocBlockLayout', None)
Using official model (PP-DocBlockLayout), the model files will be automatically downloaded and saved in /root/.paddlex/official_models.
Fetching 6 files: 100%|██████████| 6/6 [00:00<00:00, 3172.70it/s]
Automatically converting PaddlePaddle model to ONNX format
Inference backend: tensorrt
Inference backend config: precision='fp32' use_dynamic_shapes=True dynamic_shapes={'im_shape': [[1, 2], [1, 2], [8, 2]], 'image': [[1, 3, 640, 640], [1, 3, 640, 640], [8, 3, 640, 640]], 'scale_factor': [[1, 2], [1, 2], [8, 2]]}
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(572)::BuildTrtEngine Start to building TensorRT Engine...
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(659)::BuildTrtEngine TensorRT Engine is built successfully.
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(661)::BuildTrtEngine Serialize TensorRTEngine to local file /root/.paddlex/official_models/PP-DocBlockLayout/.cache/tensorrt/trt_serialized.trt.
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(672)::BuildTrtEngine TensorRTEngine is serialized to local file /root/.paddlex/official_models/PP-DocBlockLayout/.cache/tensorrt/trt_serialized.trt, we can load this model from the serialized engine directly next time.
[ERROR] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(239)::log 3: [runtime.cpp::~Runtime::346] Error Code 3: API Usage Error (Parameter check failed at: runtime/rt/runtime.cpp::~Runtime::346, condition: mEngineCounter.use_count() == 1. Destroying a runtime before destroying deserialized engines created by the runtime leads to undefined behavior.)
[INFO] ultra_infer/runtime/runtime.cc(320)::CreateTrtBackend Runtime initialized with Backend::TRT in Device::GPU.
Creating model: ('PP-DocLayout_plus-L', None)
Using official model (PP-DocLayout_plus-L), the model files will be automatically downloaded and saved in /root/.paddlex/official_models.
Fetching 6 files: 100%|██████████| 6/6 [00:09<00:00, 1.65s/it]
Automatically converting PaddlePaddle model to ONNX format
Inference backend: tensorrt
Inference backend config: precision='fp16' use_dynamic_shapes=True dynamic_shapes={'im_shape': [[1, 2], [1, 2], [8, 2]], 'image': [[1, 3, 800, 800], [1, 3, 800, 800], [8, 3, 800, 800]], 'scale_factor': [[1, 2], [1, 2], [8, 2]]}
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(567)::BuildTrtEngine [TrtBackend] Use FP16 to inference.
[INFO] ultra_infer/runtime/backends/tensorrt/trt_backend.cc(572)::BuildTrtEngine Start to building TensorRT Engine...
(the above sequence repeats, interleaved, once per parallel worker process)

The above is additional error output.

hyp530 avatar Jul 19 '25 09:07 hyp530

Did you solve it? When using paddle2onnx I also ran into ImportError: DLL load failed while importing paddle2onnx_cpp2py_export: The specified procedure could not be found.

jdbsid avatar Sep 29 '25 02:09 jdbsid

Did you solve it? When using paddle2onnx I also ran into ImportError: DLL load failed while importing paddle2onnx_cpp2py_export: The specified procedure could not be found.

Try switching paddle2onnx to version 1.3.1 (a plain pip install pulls in 2.0.2rc3, which may simply be too new). Tested with Python 3.9, visualdl 3.0.0-beta, paddlepaddle-gpu==3.2.0, gradio==3.11.0, gradio-client==1.3.0, httpx==0.24.1, httpcore==0.15.*, h11==0.12.*; in that environment visualdl runs normally from the command line.

AnthonyBvvd avatar Oct 06 '25 09:10 AnthonyBvvd
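A minimal sketch of that downgrade (1.3.1 is the version the commenter reports; whether it suits other environments is untested):

```bash
# Replace the pip-default 2.0.2rc3 with the older release reported to work.
pip install paddle2onnx==1.3.1
# Smoke test: importing the package pulls in the compiled extension that
# failed in the tracebacks above, so a clean import means the fix took.
python -c "import paddle2onnx; print(paddle2onnx.__version__)"
```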