
FATE-LLM model training fails with an FP16 Mixed precision error

Open zapjone opened this issue 9 months ago • 3 comments

While training a model by following the tutorial at https://github.com/FederatedAI/FATE-LLM/blob/main/doc/tutorial/parameter_efficient_llm/ChatGLM3-6B_ds.ipynb, the job fails with an FP16 error after submission. The job was submitted from inside the client's Docker container, and FATE-LLM/python was added to the PYTHONPATH environment variable. How can this be resolved? Thanks.

The error message: FP16 Mixed precision training with AMP or APEX ('--fp16') and FP16 half precision evaluation ('--fp16_full_eval') can only be used on CUDA or NPU devices or certain XPU devices (with IPEX)
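This error is raised by Hugging Face Transformers when `--fp16` is requested but no CUDA/NPU/XPU device is visible to the training process, which commonly happens when the GPU is not exposed to the Docker container. One hedged workaround (assuming the ChatGLM3-6B tutorial's DeepSpeed config follows the standard DeepSpeed JSON schema; the exact file and field names in the tutorial may differ) is to disable fp16 so training falls back to fp32 on CPU:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "fp16": {
    "enabled": false
  }
}
```

Alternatively, if a GPU is available on the host, verify it is visible inside the container (e.g. that the container was started with GPU access and `torch.cuda.is_available()` returns `True` there) before keeping fp16 enabled.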

zapjone avatar May 14 '24 08:05 zapjone