[BUG] Failed to load processor: No module named 'transformers_modules.minicpm_4'.
Is there an existing issue / discussion for this?
- [x] I have searched the existing issues / discussions
Is there an existing answer for this in the FAQ?
- [x] I have searched the FAQ
Current Behavior
Cannot load the processor while following the code from https://minicpm-o.readthedocs.io/en/latest/finetune/llamafactory.html
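For reference, the failing load presumably looked something like the sketch below; the local directory name is an assumption inferred from the error message, where a `.` in the path truncates the dynamically generated module name.

```python
# Hypothetical reproduction: the directory name is assumed to contain
# a "." (e.g. "minicpm_4.5"). transformers maps the path to the dynamic
# module "transformers_modules.minicpm_4.5...", and the Python import
# stops at "transformers_modules.minicpm_4", producing this error.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "./minicpm_4.5", trust_remote_code=True
)
```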
Expected Behavior
No response
Steps To Reproduce
No response
Environment
- OS: Ubuntu 22.04
- Python: 3.10
- Transformers: 4.55
- PyTorch: 2.8
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.8
Anything else?
No response
@ZMXJJ Please help answer this question about using LLaMA-Factory.
@WiedenWei It seems like the model hasn't been fully downloaded. Please make sure your local model is completely downloaded.
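A minimal sketch for verifying this, assuming the model was fetched with huggingface_hub (the repo id is an assumption; substitute the model you are fine-tuning): re-running `snapshot_download` resumes and fills in any missing files.

```python
# Minimal sketch: re-run snapshot_download to complete a partial
# download. Files already present and intact are skipped, so this is
# cheap when the snapshot is already complete.
from huggingface_hub import snapshot_download

# Repo id is an assumption; replace with the exact model you use.
local_dir = snapshot_download("openbmb/MiniCPM-V-4_5")
print("model files at:", local_dir)
```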
The model's download path must not contain any `.` characters: transformers builds the dynamic module name from the directory name, so a `.` breaks the import at runtime. For example, rename the folder from MiniCPM-V-4.5 to MiniCPM-V-4_5, as in the sketch below.
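A sketch of the rename-and-load workaround (paths are illustrative):

```python
# Illustrative sketch: move the model to a path without "." and load
# the processor from there. trust_remote_code=True is required because
# MiniCPM ships custom processing code that transformers imports
# dynamically, with the directory name embedded in the module name.
import shutil
from transformers import AutoProcessor

src = "./MiniCPM-V-4.5"   # hypothetical original path containing "."
dst = "./MiniCPM-V-4_5"   # "." replaced by "_"
shutil.move(src, dst)

processor = AutoProcessor.from_pretrained(dst, trust_remote_code=True)
```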
Thanks for your reply! I tested the code on a cloud platform and it works, so the problem may be due to a network issue while downloading the model. Can I set HF_ENDPOINT=https://hf-mirror.com to avoid the network issue?
Yes. Setting HF_ENDPOINT=https://hf-mirror.com is a common way to avoid network issues when downloading models. Alternatively, you can download the model via ModelScope. A sketch of both options follows.
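The repo ids below are assumptions; check the Hugging Face and ModelScope hubs for the exact names.

```python
# Option 1: route downloads through the hf-mirror endpoint. The env var
# must be set before huggingface_hub / transformers are imported.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download
local_dir = snapshot_download("openbmb/MiniCPM-V-4_5")

# Option 2: download via ModelScope instead (pip install modelscope).
# from modelscope import snapshot_download as ms_download
# local_dir = ms_download("OpenBMB/MiniCPM-V-4_5")
```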