
With batch_size set to 1 or 3, the loss is always 0 — is something wrong?

Open phantommlin opened this issue 1 year ago • 14 comments

{'loss': 0.0, 'learning_rate': 1.9e-05, 'epoch': 0.15}
{'loss': 0.0, 'learning_rate': 1.8e-05, 'epoch': 0.3}
{'loss': 0.0, 'learning_rate': 1.7e-05, 'epoch': 0.45}
{'loss': 0.0, 'learning_rate': 1.6000000000000003e-05, 'epoch': 0.6}
{'loss': 0.0, 'learning_rate': 1.5000000000000002e-05, 'epoch': 0.75}
{'loss': 0.0, 'learning_rate': 1.4e-05, 'epoch': 0.9}

phantommlin avatar Mar 20 '23 02:03 phantommlin

@phantommlin Hello, have you solved the loss-is-0 problem?

TccccD avatar Mar 20 '23 05:03 TccccD

Not yet.

phantommlin avatar Mar 20 '23 09:03 phantommlin

It may be hardware-related: https://github.com/mymusise/ChatGLM-Tuning/issues/19

mymusise avatar Mar 20 '23 12:03 mymusise

I'm training on a P40; with batch_size = 1 the loss is also 0. Have you solved it?

{"epoch": 0.0, "learning_rate": 1.9980769230769233e-05, "loss": 0.0, "step": 50}
{"epoch": 0.0, "learning_rate": 1.9961538461538464e-05, "loss": 0.0, "step": 100}
{"epoch": 0.0, "learning_rate": 1.9942307692307695e-05, "loss": 0.0, "step": 150}

Update: with batch_size = 2, the loss is non-zero at step 50 but 0 for every step after that — this feels like a bug.

{"epoch": 0.0, "learning_rate": 1.9980769230769233e-05, "loss": 1.6446, "step": 50}
{"epoch": 0.0, "learning_rate": 1.9961538461538464e-05, "loss": 0.0, "step": 100}
{"epoch": 0.01, "learning_rate": 1.9942307692307695e-05, "loss": 0.0, "step": 150}
{"epoch": 0.01, "learning_rate": 1.9923076923076926e-05, "loss": 0.0, "step": 200}

Adherer avatar Mar 23 '23 12:03 Adherer
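A small helper (hypothetical, not part of the repo) can make this symptom easier to catch early: scan the trainer's logged history (e.g. `trainer.state.log_history` in HF Trainer) and flag when the loss has collapsed to exactly 0.0 for several consecutive log steps, which matches the pattern reported above.

```python
def loss_collapsed(log_history, window=3):
    """Return True if the last `window` logged losses are all exactly 0.0.

    `log_history` is a list of dicts like {"loss": ..., "step": ...},
    the shape the HF Trainer logs use.
    """
    losses = [entry["loss"] for entry in log_history if "loss" in entry]
    tail = losses[-window:]
    return len(tail) == window and all(loss == 0.0 for loss in tail)
```

On Adherer's batch_size = 2 run, this would stay quiet at step 50 (loss 1.6446) and trip once steps 100–200 all report 0.0.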

I hit this problem too, on a V100. I've traced it to this line in modeling_chatGLM::SelfAttention::forward(): output = self.dense(context_layer)

The output contains inf and -inf. self.dense is a <class 'bitsandbytes.nn.modules.Linear8bitLt'>, so at first glance it looks like an int8-quantization issue. The internals of Linear8bitLt are too complex — I haven't worked them out yet. Still don't know the root cause; this has been bugging me for two days...


zhangzhenhu avatar Mar 24 '23 06:03 zhangzhenhu
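The kind of debugging zhangzhenhu describes — finding which layer first emits inf/-inf — can be automated with forward hooks. A minimal sketch (helper name is made up; assumes a standard torch `nn.Module` such as the ChatGLM model):

```python
import torch
import torch.nn as nn

def attach_nonfinite_hooks(model: nn.Module):
    """Register forward hooks that record which submodules emit inf/nan.

    Returns a list that fills with offending module names as the model runs.
    """
    offenders = []

    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
                offenders.append(name or module.__class__.__name__)
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))
    return offenders
```

Run one training step, then inspect the list: on the setup described above it should name the 8-bit `dense` projection inside `SelfAttention` as the first non-finite output.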


I've solved this: enable fp16 in the training script and set load_in_8bit to False when loading the model — training then runs normally. I haven't worked out the underlying cause.

Adherer avatar Mar 24 '23 06:03 Adherer
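A sketch of that workaround — fp16 weights instead of 8-bit loading. The checkpoint name and keyword set below are assumptions based on the usual ChatGLM-Tuning setup, not confirmed in this thread:

```python
import torch

# Keyword arguments for AutoModel.from_pretrained implementing the workaround:
model_kwargs = dict(
    load_in_8bit=False,         # skip bitsandbytes' Linear8bitLt entirely
    torch_dtype=torch.float16,  # load fp16 weights instead of quantizing to int8
    trust_remote_code=True,     # ChatGLM ships custom modeling code
)

# from transformers import AutoModel
# model = AutoModel.from_pretrained("THUDM/chatglm-6b", **model_kwargs)
# ...and also pass fp16=True in TrainingArguments so the Trainer trains in fp16.
```

The trade-off is memory: fp16 weights take roughly twice the VRAM of the int8 ones.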

With fp16 enabled and load_in_8bit set to False, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

With fp16 disabled and load_in_8bit set to True, training runs normally but the loss stays at 0.

SizhaoXu avatar Mar 28 '23 06:03 SizhaoXu
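That RuntimeError means some weights (often the freshly added LoRA adapters) stayed on CPU while the base model is on cuda:0. A quick diagnostic (hypothetical helper, assuming a standard torch model) is to list every device the model's tensors live on — the fix is then to `.to()` the stragglers or let `device_map` place everything:

```python
import torch
import torch.nn as nn

def parameter_devices(model: nn.Module):
    """Return the set of devices the model's parameters and buffers live on.

    A healthy single-GPU setup returns a one-element set, e.g. {device('cuda:0')}.
    """
    devices = {p.device for p in model.parameters()}
    devices |= {b.device for b in model.buffers()}
    return devices
```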


Solved — updating peft made it work.

SizhaoXu avatar Mar 30 '23 03:03 SizhaoXu
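Since the thread never pins down which peft release fixes this (several people ask below), a runtime check of the installed version is the practical first step before comparing notes:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_peft_version():
    """Return the installed peft version string, or None if peft is absent."""
    try:
        return version("peft")
    except PackageNotFoundError:
        return None
```

Then upgrade with `pip install -U peft` and re-run the check; v0.2.0 is reported below as still broken, so anything at or below that is suspect.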

@SizhaoXu Bro, is the "fp16 disabled, load_in_8bit set to True — runs normally but loss stays at 0" setup actually usable?

chuckhope avatar Mar 30 '23 10:03 chuckhope

Hi, which version of peft did you update to? I'm already on v0.2.0.

dominicqi avatar Apr 14 '23 10:04 dominicqi

Which version of peft did you update to?

iMountTai avatar Apr 22 '23 10:04 iMountTai

Do you mean updating peft to the latest version, 0.3.0.dev0?

moseshu avatar Apr 30 '23 01:04 moseshu

With fp16 enabled and load_in_8bit set to False, I get: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

How do I fix this error?

guotong1988 avatar Jun 20 '23 09:06 guotong1988

Which version of peft are you using now?

Cosjimmy avatar Nov 13 '23 09:11 Cosjimmy