intel-extension-for-pytorch
Is BFloat16 model training supported on ARC?
Describe the issue
Hi, I tried to fine-tune Llama2 on ARC with the BFloat16 data type and the AdamW optimizer using transformers.Trainer, but I hit this error, raised from intel_extension_for_pytorch/optim/_functional.py, line 1256, in adamw_step:

RuntimeError: parameter in optimizer(Adamw) is not FP32, need check

So I want to confirm: is BFloat16 model training supported on ARC now? If not, could you please add this feature?
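
For reference, here is a minimal sketch of roughly what my training setup looks like (the model name, toy dataset, and hyperparameters below are illustrative, not my exact fine-tuning script):

```python
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

# Illustrative model; any causal LM loaded in BF16 goes down the same path.
model_name = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)


class ToyDataset(Dataset):
    """Tiny dummy dataset, just enough to drive a few optimizer steps."""

    def __init__(self):
        enc = tokenizer("hello world", return_tensors="pt")
        self.input_ids = enc["input_ids"][0]

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return {"input_ids": self.input_ids, "labels": self.input_ids.clone()}


args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    bf16=True,            # BFloat16 training
    optim="adamw_torch",  # AdamW optimizer
)

trainer = Trainer(model=model, args=args, train_dataset=ToyDataset())
# On ARC (XPU) this fails inside IPEX's fused adamw_step with:
#   RuntimeError: parameter in optimizer(Adamw) is not FP32, need check
trainer.train()
```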
PS: I'm using the dev/QLLM branch, commit id d600370d44882ae8e15949452fe1d9f324cf6900.
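
In case it helps with triage: is the expected path on ARC to start from an FP32 model and let ipex.optimize set up the FP32 master weights that the fused AdamW step expects, something like the sketch below? This is just a guess based on the ipex.optimize API; I haven't confirmed it is supported on XPU, and the model name and learning rate are illustrative.

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM

# Guess: load the model in FP32, then let ipex.optimize cast it to BF16 while
# keeping FP32 master weights for the optimizer (unverified on ARC/XPU).
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf").to("xpu")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
```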