zero_nlp
chatglm6b_v2 single-machine multi-GPU training: found at least two devices, cuda:1 and cuda:0!
Running chatglm2 on a 3090 (24 GB VRAM) without modifying anything, I get the following error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)
Please make sure to read the [Note] ⚠️ written in the 07-17 entry of my readme.md.
If you have already fixed the code and the problem persists, check your transformers version; I recommend upgrading to the latest release: pip install -U transformers
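For anyone else hitting this: the error usually means that under multi-GPU model parallelism the labels tensor ends up on a different GPU than the logits at loss time (e.g. the final layers on cuda:1 while the batch is still on cuda:0). A minimal sketch of that kind of fix, with hypothetical names rather than the repo's actual code, looks like:

```python
import torch
import torch.nn as nn

# Hypothetical loss helper: with naive model parallelism the lm_head may sit
# on cuda:1 while the labels from the DataLoader are still on cuda:0, which
# trips nll_loss's device check.
loss_fct = nn.CrossEntropyLoss()

def compute_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Move the target onto the logits' device before computing the loss,
    # avoiding "Expected all tensors to be on the same device".
    labels = labels.to(logits.device)
    return loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
```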
@yuanzhoulvpi2017 It works now, thanks!
Thanks as well, the error is resolved for me too.
I updated the code and also upgraded transformers, so why am I still seeing this problem?