mPLUG-Owl
ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0
How can I solve this problem? When I set the bf16 flag to False, another issue occurs: `RuntimeError: "erfinv_vml_cpu" not implemented for 'Half'`. Thanks.
You can initialize the model on the CPU in float32. After initialization, convert the model to half precision and then move it to the GPU.
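A minimal sketch of this workaround, using a small `nn.Linear` as a stand-in for the actual mPLUG-Owl model (the repo's own loading code may differ). The point is the order of operations: weight initialization happens in float32 on the CPU (so CPU-only half-precision ops like `erfinv` are never hit), and only afterwards is the model converted to half and moved to the GPU:

```python
import torch
import torch.nn as nn

# Stand-in for the real model; replace with mPLUG-Owl's own construction code.
# Construction and weight init run on CPU in float32 (the default), so no
# half-precision CPU kernels (e.g. erfinv) are needed during initialization.
model = nn.Linear(16, 16)

# Only after init: convert weights to float16 ...
model = model.half()

# ... and then move the model to the GPU, if one is present.
if torch.cuda.is_available():
    model = model.cuda()
```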
Setting bf16 to False and low_cpu_mem_usage=True may work.
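If the model is loaded through Hugging Face's `from_pretrained`, those two flags would be passed roughly as below. This is a sketch only: the checkpoint id is a placeholder, and the repository's own loading script may wire these options differently.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint id, shown for illustration only.
model = AutoModelForCausalLM.from_pretrained(
    "your-org/mplug-owl-checkpoint",
    torch_dtype=torch.float32,   # bf16 disabled: load weights in full precision
    low_cpu_mem_usage=True,      # avoid materializing a second full copy in host RAM
)
```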
@YuchenLiu98 Hi, Liu. Sorry to bother you. I've met the same error as you. After initializing the model on the CPU in float32, converting it to half, and moving it to the GPU, a GPU out-of-memory error occurred. Have you solved this problem? Please let me know, thanks!
@MAGAer13, I set bf16 to False and low_cpu_mem_usage to True but am still getting the error `"erfinv_vml_cpu" not implemented for 'Half'`. Can you please help?