No available model hosting platforms detected. Please check your network connection
🔎 Search before asking
- [x] I have searched the PaddleOCR Docs and found no similar bug report.
- [x] I have searched the PaddleOCR Issues and found no similar bug report.
- [x] I have searched the PaddleOCR Discussions and found no similar bug report.
🐛 Bug
I have already set text_recognition_model_dir explicitly to a local directory, yet it still raises a hosting-platform error and tries to connect to the network.
ocr = PaddleOCR(
    text_recognition_model_dir="/mnt/posfs/globalmount/PP-OCRv5_server_rec",
    use_doc_orientation_classify=False,  # disable the document orientation classification model
    use_doc_unwarping=False,  # disable the document unwarping module
    use_textline_orientation=False,  # disable the textline orientation classification model
    device="cpu",  # run model inference on CPU
)
Creating model: ('PP-OCRv5_server_det', None)
Using official model (PP-OCRv5_server_det), the model files will be automatically downloaded and saved in /root/.paddlex/official_models.
No available model hosting platforms detected. Please check your network connection.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/site-packages/paddleocr/_pipelines/ocr.py", line 163, in __init__
    super().__init__(**base_params)
  File "/usr/local/lib/python3.10/site-packages/paddleocr/_pipelines/base.py", line 67, in __init__
    self.paddlex_pipeline = self._create_paddlex_pipeline()
  File "/usr/local/lib/python3.10/site-packages/paddleocr/_pipelines/base.py", line 102, in _create_paddlex_pipeline
    return create_pipeline(config=self._merged_paddlex_config, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/__init__.py", line 166, in create_pipeline
    pipeline = BasePipeline.get(pipeline_name)(
  File "/usr/local/lib/python3.10/site-packages/paddlex/utils/deps.py", line 202, in _wrapper
    return old_init_func(self, *args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 103, in __init__
    self._pipeline = self._create_internal_pipeline(config, self.device)
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/_parallel.py", line 158, in _create_internal_pipeline
    return self._pipeline_cls(
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/ocr/pipeline.py", line 117, in __init__
    self.text_det_model = self.create_model(
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/pipelines/base.py", line 105, in create_model
    model = create_predictor(
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/models/__init__.py", line 69, in create_predictor
    model_dir = official_models[model_name]
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/utils/official_models.py", line 577, in __getitem__
    return self._get_model_local_path(model_name)
  File "/usr/local/lib/python3.10/site-packages/paddlex/inference/utils/official_models.py", line 552, in _get_model_local_path
    raise Exception(msg)
Exception: No available model hosting platforms detected. Please check your network connection.
🏃‍♂️ Environment
paddleocr==3.2.0
paddlepaddle==3.2.0
paddlex==3.2.0
🌰 Minimal Reproducible Example
ocr = PaddleOCR(
    text_recognition_model_dir="your local path containing model and config",
    use_doc_orientation_classify=False,  # disable the document orientation classification model
    use_doc_unwarping=False,  # disable the document unwarping module
    use_textline_orientation=False,  # disable the textline orientation classification model
    device="cpu",  # run model inference on CPU
)
Similar bugs have been reported before with no solution yet. It would be a great help if local models were actually supported.
To run within an intranet, initialization requires every model parameter to point to a local model directory on the machine.
To run offline, you need to specify the directories for both text_detection_model_dir and text_recognition_model_dir. Since you only specified the recognition model, the detection model is still resolved through the online hosting platforms by default, which is exactly what fails in your traceback.
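For illustration, a fully offline initialization might look like the sketch below. The paths are placeholders; in particular, the detection directory is assumed to hold a pre-downloaded copy of PP-OCRv5_server_det.

from paddleocr import PaddleOCR

# Sketch of an offline setup: both the detection and recognition models
# resolve from local directories, so the pipeline never has to contact a
# model hosting platform. Paths are placeholders for illustration.
ocr = PaddleOCR(
    text_detection_model_dir="/mnt/posfs/globalmount/PP-OCRv5_server_det",
    text_recognition_model_dir="/mnt/posfs/globalmount/PP-OCRv5_server_rec",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False,
    device="cpu",
)

With the auxiliary modules disabled and both directories supplied, nothing in this pipeline should be left to resolve through official_models.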
Does this method solve your problem?