BallonsTranslator
Bug Report: YSG does not load
Version Info
Python version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Python executable: G:\Ballon-translator-portable-main\python\python.exe
Version: 1.4.0
Branch: dev
Commit hash: 6171e42ff6f96c1d6fb89f6ba2f18ba687d3f7a1
Device name: NVIDIA GeForce RTX 3060
Cuda is available: True
Cuda version: 11.8
ZLUDA is available: False
Description of the Problem
- Deleted the config file
- Reinstalled from scratch
- Tried to fix it myself, but didn't quite understand (the AI didn't really understand either)
Text Detector
YSGYOLO
OCR
None
Inpaint
None
Translator
None
Screenshot
No response
Logs
[INFO ] launch:main:235 - set display language to English
[INFO ] module_manager:merge_config_module_params:569 - Reorder param dict in config
[INFO ] mainwindow:on_finish_settranslator:296 - Translator set to google
[INFO ] mainwindow:on_finish_setdetector:268 - Text detector set to ctd
[INFO ] mainwindow:on_finish_setinpainter:286 - Inpainter set to lama_large_512px
[INFO ] mainwindow:on_finish_setocr:277 - OCR set to mit48px
Traceback (most recent call last):
  File "G:\Ballon-translator-portable-main\ui\module_parse_widgets.py", line 363, in on_module_changed
    self.updateModuleParamWidget()
  File "G:\Ballon-translator-portable-main\ui\module_parse_widgets.py", line 354, in updateModuleParamWidget
    widget = ParamWidget(params, scrollWidget=self)
  File "G:\Ballon-translator-portable-main\ui\module_parse_widgets.py", line 249, in __init__
    raise ValueError(f"Failed to initialize widget for key: {param_key}")
ValueError: Failed to initialize widget for key: model path
Additional Information
No response
Try 23cb098. If that doesn't fix it, it should at least print more elaborate info.
[INFO ] mainwindow:on_finish_setinpainter:286 - Inpainter set to lama_large_512px
Traceback (most recent call last):
  File "G:\Ballon-translator-portable-main\ui\module_parse_widgets.py", line 363, in on_module_changed
    self.updateModuleParamWidget()
  File "G:\Ballon-translator-portable-main\ui\module_parse_widgets.py", line 354, in updateModuleParamWidget
    widget = ParamWidget(params, scrollWidget=self)
  File "G:\Ballon-translator-portable-main\ui\module_parse_widgets.py", line 249, in __init__
    raise ValueError(f"Failed to initialize widget for key-value pair: {param_key}-" + params[param_key])
TypeError: can only concatenate str (not "dict") to str
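The new error message itself crashes before it can be raised: the diagnostic added in that commit concatenates a str with a dict. A minimal sketch of the failure and one possible fix (the `params` contents here are hypothetical, only the shape matters):

```python
# Sketch of the TypeError in the log: "str + dict" raises, while an
# f-string formats any value via repr()/str().
param_key = "model path"
params = {"model path": {"value": "data/models/ysgyolo_v11_x.pt"}}  # dict, not str

try:
    # This is roughly what line 249 does after the commit: it fails with
    # TypeError before the intended ValueError is ever raised.
    msg = f"Failed to initialize widget for key-value pair: {param_key}-" + params[param_key]
except TypeError:
    # Formatting with !r works for any value type and keeps the diagnostic.
    msg = f"Failed to initialize widget for key-value pair: {param_key}-{params[param_key]!r}"

print(msg)
```
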
Deleted the config file
Did you try it, and does the error still occur? Also please try 58f0461 and show the new error message.
Now it starts, tries to open the model, doesn't find it, and crashes. Can you just make the default state null? While the value is null, the model would not be loaded.
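The suggested behavior could look something like this. This is only a sketch of the idea, not the project's actual loader; the function name and structure are hypothetical:

```python
import os

def load_model_if_configured(model_path):
    """Treat an empty/null model path as "no model selected" and skip
    loading entirely, instead of crashing on a missing file."""
    if not model_path:  # None or "" -> nothing to load, no error
        return None
    if not os.path.isfile(model_path):
        # Surface a clear error only when a path was actually configured.
        raise FileNotFoundError(model_path)
    # ...actual model loading (e.g. YOLO(model_path)) would happen here...
    return model_path

print(load_model_if_configured(None))  # -> None, no crash
```
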
[INFO ] mainwindow:on_finish_setdetector:268 - Text detector set to ctd
[WARNING] detector_ysg:_load_model:94 - data/models/ysgyolo_v11_x.pt does not exist, try fall back to default value data/models/ysgyolo_v11_x.pt
[ERROR ] __init__:create_error_dialog:29 - [Errno 2] No such file or directory: 'data\\models\\ysgyolo_v11_x.pt'
Failed to set module.
[ERROR ] __init__:create_error_dialog:30 - Traceback (most recent call last):
  File "G:\Ballon-translator-portable-main\ui\module_manager.py", line 58, in _set_module
    self.module.load_model()
  File "G:\Ballon-translator-portable-main\modules\base.py", line 143, in load_model
    self._load_model()
  File "G:\Ballon-translator-portable-main\modules\textdetector\detector_ysg.py", line 102, in _load_model
    self.model = MODEL(model_path).to(device=self.get_param_value('device'))
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\models\yolo\model.py", line 23, in __init__
    super().__init__(model=model, task=task, verbose=verbose)
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\engine\model.py", line 148, in __init__
    self._load(model, task=task)
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\engine\model.py", line 290, in _load
    self.model, self.ckpt = attempt_load_one_weight(weights)
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\nn\tasks.py", line 1039, in attempt_load_one_weight
    ckpt, weight = torch_safe_load(weight)  # load ckpt
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\nn\tasks.py", line 944, in torch_safe_load
    ckpt = torch.load(file, map_location="cpu")
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\utils\patches.py", line 86, in torch_load
    return _torch_load(*args, **kwargs)
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\torch\serialization.py", line 998, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\torch\serialization.py", line 445, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\torch\serialization.py", line 426, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'data\\models\\ysgyolo_v11_x.pt'
YSGYoloDetector has deleted ysgyolo_v11_x.pt, so the code needs to be modified before it can be used.
The model path is hardcoded in the code... @dmMaze
If you remove the model, it gives this:
[ERROR ] __init__:create_error_dialog:29 - model='' should be a *.pt PyTorch model to run this method, but is a different format. PyTorch models can train, val, predict and export, i.e. 'model.train(data=...)', but exported formats like ONNX, TensorRT etc. only support 'predict' and 'val' modes, i.e. 'yolo predict model=yolo11n.onnx'.
To run CUDA or MPS inference please pass the device argument directly in your inference command, i.e. 'model.predict(source=..., device=0)'
Failed to set module.
[ERROR ] __init__:create_error_dialog:30 - Traceback (most recent call last):
  File "G:\Ballon-translator-portable-main\ui\module_manager.py", line 58, in _set_module
    self.module.load_model()
  File "G:\Ballon-translator-portable-main\modules\base.py", line 143, in load_model
    self._load_model()
  File "G:\Ballon-translator-portable-main\modules\textdetector\detector_ysg.py", line 102, in _load_model
    self.model = MODEL(model_path).to(device=self.get_param_value('device'))
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\torch\nn\modules\module.py", line 1152, in to
    return self._apply(convert)
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\engine\model.py", line 870, in _apply
    self._check_is_pytorch_model()
  File "G:\Ballon-translator-portable-main\python\lib\site-packages\ultralytics\engine\model.py", line 323, in _check_is_pytorch_model
    raise TypeError(
TypeError: model='' should be a *.pt PyTorch model to run this method, but is a different format. PyTorch models can train, val, predict and export, i.e. 'model.train(data=...)', but exported formats like ONNX, TensorRT etc. only support 'predict' and 'val' modes, i.e. 'yolo predict model=yolo11n.onnx'.
To run CUDA or MPS inference please pass the device argument directly in your inference command, i.e. 'model.predict(source=..., device=0)'
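Both failure modes above come from resolving the model path before checking whether the file exists: the warning earlier in the thread even falls back to the same missing default path. A safer resolution step might look like this sketch (function name and default are hypothetical, mirroring the path from the log):

```python
import os

DEFAULT_MODEL = "data/models/ysgyolo_v11_x.pt"  # hypothetical default, as in the warning log

def resolve_model_path(configured, default=DEFAULT_MODEL):
    """Sketch of a safer fallback: use the configured path if it exists,
    fall back to the default only if the default itself exists, and
    otherwise return None so the caller can report "model file missing"
    instead of crashing in torch.load or passing model=''."""
    if configured and os.path.isfile(configured):
        return configured
    if default and os.path.isfile(default):
        return default
    return None
```

With this, the YSG detector could refuse to load (with a clear dialog) whenever `resolve_model_path(...)` returns None, rather than raising FileNotFoundError or the ultralytics `model=''` TypeError.
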
Bug still not fixed