
RuntimeError: "_amp_foreach_non_finite_check_and_unscale_cuda" not implemented for 'BFloat16'


Is there an existing issue / discussion for this?

  • [X] I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • [X] I have searched the FAQ

Current Behavior

Finetuning on my own data fails with RuntimeError: "_amp_foreach_non_finite_check_and_unscale_cuda" not implemented for 'BFloat16' (screenshot attached).

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

— Qinger27, Apr 26 '24

I ran into this problem too. Could it be caused by the model.to(torch.bfloat16) call in the code?

— apachemycat, May 07 '24
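
For reference, a minimal sketch of the usual workaround, assuming the error comes from running the fp16 GradScaler (whose unscale kernel only handles fp16/fp32 gradients) on a model that was cast to bfloat16. The model, optimizer, and data below are placeholders, not the Qwen-VL finetune code: the point is to keep the weights in fp32 and rely on bf16 autocast, which needs no loss scaling.

```python
# Hypothetical minimal example, not the Qwen-VL finetune script.
import torch

# Keep weights in fp32; do NOT call model.to(torch.bfloat16).
model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 16, device="cuda")
target = torch.randn(8, 4, device="cuda")

# Autocast runs the forward pass in bfloat16 while gradients stay fp32,
# so GradScaler (and its fp16-only unscale kernel) is not needed at all.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), target)

loss.backward()
optimizer.step()
optimizer.zero_grad()
```

If the finetune script goes through the HuggingFace Trainer, the equivalent is usually to set bf16=True and leave fp16 off in TrainingArguments, so that no GradScaler is created in the first place.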