
Is it necessary to use Ampere GPU?

GitHubJKin opened this issue 2 years ago · 2 comments

The environment is Colab with a Tesla T4 GPU, torch 2.0, and CUDA 11.8. Training a model fails with this error: ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0

GitHubJKin avatar Apr 17 '23 06:04 GitHubJKin

Could I run this on my own 3060 Ti at home? It only has 8 GB of VRAM, though; would training run out of memory?

GitHubJKin avatar Apr 17 '23 06:04 GitHubJKin

This is because the T4 does not support bf16. You can change bf16 to fp16 in the run command; an Ampere GPU is not required for training. 8 GB of VRAM is not enough to train LLaMA-7B, but you can try smaller models such as Galactica-1.3B.
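The advice above comes down to picking the precision flag from the GPU's compute capability: bf16 needs Ampere or newer (SM 8.0+), while the Tesla T4 is SM 7.5. A minimal sketch of that decision (the helper name is mine; on a real machine the major version would come from `torch.cuda.get_device_capability()`):

```python
def pick_mixed_precision(cc_major: int) -> str:
    """Choose a mixed-precision flag from the GPU's compute capability.

    bf16 requires Ampere or newer (compute capability 8.0+);
    older cards such as the Tesla T4 (7.5) fall back to fp16.
    """
    return "bf16" if cc_major >= 8 else "fp16"

# Hard-coded majors for illustration; in practice query the device.
print(pick_mixed_precision(7))  # Tesla T4 -> fp16
print(pick_mixed_precision(8))  # A100 / RTX 30-series -> bf16
```

With the result in hand, you would pass `--fp16` instead of `--bf16` to the training command on a T4.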

shizhediao avatar Apr 19 '23 06:04 shizhediao

This issue has been marked as stale because it has not had recent activity. If you think this still needs to be addressed please feel free to reopen this issue. Thanks

shizhediao avatar May 15 '23 00:05 shizhediao