
[BUG] Failed to load processor: No module named 'transformers_modules.minicpm_4'.

WiedenWei opened this issue 3 months ago · 5 comments

Is there an existing issue / discussion for this?

  • [x] I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • [x] I have searched the FAQ

Current Behavior

Cannot load the processor while following the code from https://minicpm-o.readthedocs.io/en/latest/finetune/llamafactory.html

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS: Ubuntu 22.04
- Python: 3.10
- Transformers: 4.55
- PyTorch: 2.8
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.8

Anything else?

No response

WiedenWei avatar Sep 01 '25 10:09 WiedenWei

@ZMXJJ Please help answer this question about using LLaMA-Factory.

tc-mb avatar Sep 02 '25 03:09 tc-mb

@WiedenWei It seems like the model hasn't been fully downloaded. Please make sure your local model is completely downloaded.

ZMXJJ avatar Sep 02 '25 07:09 ZMXJJ

The model's download path must not contain any `.` characters. A `.` in the directory name will break the dynamic import at runtime, because the directory name becomes part of the generated `transformers_modules` module path. For example, rename the folder/directory to MiniCPM-V-4_5.
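A minimal sketch of the fix (the directory names are examples based on the rename suggested above):

```shell
# A "." in the checkpoint directory name breaks trust_remote_code imports,
# because the name is embedded in the transformers_modules.<dirname> module path.
mkdir -p MiniCPM-V-4.5          # stand-in for the downloaded checkpoint folder
# Rename the directory so it contains no dots:
mv MiniCPM-V-4.5 MiniCPM-V-4_5
```

After the rename, point your fine-tuning config at the new path so the processor can be imported.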

YuzaChongyi avatar Sep 02 '25 08:09 YuzaChongyi

Thanks for your reply! I tested the code on a cloud platform and it works, so the problem was probably due to a network issue while downloading the models. Can I set HF_ENDPOINT=https://hf-mirror.com to avoid the network issue?

WiedenWei avatar Sep 03 '25 03:09 WiedenWei

Yes, absolutely! Setting HF_ENDPOINT=https://hf-mirror.com is a common way to avoid network issues when downloading models. As an alternative, you can also download the models via ModelScope.

ZMXJJ avatar Sep 10 '25 03:09 ZMXJJ