[Regression] Model support check for specified device no longer working
### 🔎 Search before asking
- [x] I have searched the PaddleOCR Docs and found no similar bug report.
- [x] I have searched the PaddleOCR Issues and found no similar bug report.
- [x] I have searched the PaddleOCR Discussions and found no similar bug report.
### 🐛 Bug (Description)
Running the following command:

```shell
(venv-3.12) PS C:\Users\GPUVM\Desktop\New folder (14)> paddleocr ocr -i "Path\to\image" --lang ch --use_doc_orientation_classify False --use_doc_unwarping False --use_textline_orientation false --device dcu
```

results in the following error:
```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Scripts\paddleocr.exe\__main__.py", line 7, in <module>
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\__main__.py", line 26, in console_entry
    main()
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_cli.py", line 124, in main
    _execute(args)
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_cli.py", line 113, in _execute
    args.executor(args)
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_pipelines\ocr.py", line 614, in execute_with_args
    perform_simple_inference(PaddleOCR, params)
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_utils\cli.py", line 62, in perform_simple_inference
    wrapper = wrapper_cls(**init_params)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_pipelines\ocr.py", line 161, in __init__
    super().__init__(**base_params)
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_pipelines\base.py", line 66, in __init__
    self.paddlex_pipeline = self._create_paddlex_pipeline()
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_pipelines\base.py", line 99, in _create_paddlex_pipeline
    kwargs = prepare_common_init_args(None, self._common_args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddleocr\_common_args.py", line 75, in prepare_common_init_args
    pp_option = PaddlePredictorOption(
                ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddlex\inference\utils\pp_option.py", line 74, in __init__
    self._init_option(**kwargs)
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddlex\inference\utils\pp_option.py", line 104, in _init_option
    setattr(self, k, v)
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddlex\inference\utils\pp_option.py", line 214, in device_type
    check_supported_device_type(device_type, self.model_name)
  File "C:\Users\GPUVM\Desktop\New folder (14)\venv-3.12\Lib\site-packages\paddlex\utils\device.py", line 135, in check_supported_device_type
    assert model_name in DCU_WHITELIST, (
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: The DCU device does not yet support `None` model!You could set env `PADDLE_PDX_DISABLE_DEV_MODEL_WL` to `true` to disable this checking.
```
In version 3.0.1 this check previously worked correctly and reported the actual model name:
```
Traceback (most recent call last):
  File "C:\Users\gpuvm\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\gpuvm\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\Scripts\paddleocr.exe\__main__.py", line 7, in <module>
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\__main__.py", line 26, in console_entry
    main()
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\_cli.py", line 124, in main
    _execute(args)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\_cli.py", line 113, in _execute
    args.executor(args)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\_pipelines\ocr.py", line 598, in execute_with_args
    perform_simple_inference_ocr(PaddleOCR, params)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\_utils\cli.py", line 107, in perform_simple_inference_ocr
    wrapper = wrapper_cls(**init_params)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\_pipelines\ocr.py", line 161, in __init__
    super().__init__(**base_params)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\_pipelines\base.py", line 63, in __init__
    self.paddlex_pipeline = self._create_paddlex_pipeline()
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddleocr\_pipelines\base.py", line 97, in _create_paddlex_pipeline
    return create_pipeline(config=self._merged_paddlex_config, **kwargs)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\pipelines\__init__.py", line 165, in create_pipeline
    pipeline = BasePipeline.get(pipeline_name)(
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\utils\deps.py", line 195, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\pipelines\_parallel.py", line 103, in __init__
    self._pipeline = self._create_internal_pipeline(config, self.device)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\pipelines\_parallel.py", line 158, in _create_internal_pipeline
    return self._pipeline_cls(
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\pipelines\ocr\pipeline.py", line 114, in __init__
    self.text_det_model = self.create_model(
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\pipelines\base.py", line 109, in create_model
    model = create_predictor(
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\models\__init__.py", line 77, in create_predictor
    return BasePredictor.get(model_name)(
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\models\text_detection\predictor.py", line 48, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\models\base\predictor\base_predictor.py", line 121, in __init__
    self._pp_option = self._prepare_pp_option(pp_option, device)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\models\base\predictor\base_predictor.py", line 341, in _prepare_pp_option
    pp_option.device_type = device_info[0]
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\inference\utils\pp_option.py", line 181, in device_type
    check_supported_device_type(device_type, self.model_name)
  File "C:\Users\gpuvm\Desktop\New folder (2)\venv-3.10\lib\site-packages\paddlex\utils\device.py", line 171, in check_supported_device_type
    assert model_name in DCU_WHITELIST, (
AssertionError: The DCU device does not yet support `PP-OCRv5_server_det` model!You could set env `PADDLE_PDX_DISABLE_DEV_MODEL_WL` to `true` to disable this checking.
```
The commit that introduced this issue is 5c9f3b4. The problem is that the new code at https://github.com/PaddlePaddle/PaddleOCR/blob/4602329be9432db4328f28a3e16a04a9eb8e823e/paddleocr/_common_args.py#L75-L77 initializes the device before `model_name` is set, which means the assertion check can no longer work correctly. Switching back to the old version, where the device is parsed via kwargs, makes the assertion work correctly again.
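The failure mode can be illustrated with a small standalone sketch (the class and whitelist here are hypothetical stand-ins, not the actual PaddleX implementation in `pp_option.py` / `device.py`): a whitelist check that reads `self.model_name` inside the `device_type` setter can never pass if the device is assigned while the model name is still unknown.

```python
# Illustrative sketch of the ordering bug (hypothetical names, not the
# real PaddleX code): the DCU whitelist check reads `self.model_name`
# inside the `device_type` setter, so it sees `None` when the device is
# set before the model name is known.

DCU_WHITELIST = {"PP-OCRv5_server_det"}  # assume this model is whitelisted


class PredictorOption:
    def __init__(self, device_type=None, model_name=None):
        self.model_name = model_name        # may still be None here ...
        if device_type is not None:
            self.device_type = device_type  # ... when this triggers the check

    @property
    def device_type(self):
        return self._device_type

    @device_type.setter
    def device_type(self, value):
        if value == "dcu":
            # Mirrors check_supported_device_type(): if model_name is
            # still None at this point, the check always fails.
            assert self.model_name in DCU_WHITELIST, (
                f"The DCU device does not yet support `{self.model_name}` model!"
            )
        self._device_type = value


# Old code path: device and model name supplied together -> check passes.
PredictorOption(device_type="dcu", model_name="PP-OCRv5_server_det")

# New code path: device set before the model name is known -> check fails.
try:
    PredictorOption(device_type="dcu")
except AssertionError as e:
    print(e)  # The DCU device does not yet support `None` model!
```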
### 🏃‍♂️ Environment

- OS: Windows 11
- PaddleOCR 3.0.2
- PaddlePaddle 3.0.0 (CPU version)
- RAM: 16 GB
- GPU: Nvidia GTX 1660 Ti
- Installed via pip in a venv with Python 3.12
### 🌰 Minimal Reproducible Example
Explained above (see the command and traceback in the bug description).
@TingquanGao PTAL
Thanks for the feedback. We have confirmed that there is a bug here, and we will fix this issue as soon as possible. Until then, you can set `PADDLE_PDX_DISABLE_DEV_MODEL_WL=1` to bypass the error.
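For anyone hitting this before the fix lands, the workaround is just an environment variable set before invoking `paddleocr` (the error message suggests the value `true`; the maintainer's reply suggests `1` — this sketch uses `true`):

```shell
# Workaround: disable the device/model whitelist check via the environment.
# POSIX shell:
export PADDLE_PDX_DISABLE_DEV_MODEL_WL=true

# PowerShell equivalent (matching the reporter's environment):
#   $env:PADDLE_PDX_DISABLE_DEV_MODEL_WL = "true"

# Then run paddleocr as usual, e.g.:
#   paddleocr ocr -i "Path\to\image" --lang ch --device dcu

echo "whitelist check disabled: $PADDLE_PDX_DISABLE_DEV_MODEL_WL"
```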
This issue has had no response for a long time and will be closed. You can reopen it or open a new issue if you are still confused.

(From bot)