ChatGLM-Tuning
With bs set to 1 or 3, the loss is always 0. Is something wrong somewhere?
{'loss': 0.0, 'learning_rate': 1.9e-05, 'epoch': 0.15}
{'loss': 0.0, 'learning_rate': 1.8e-05, 'epoch': 0.3}
{'loss': 0.0, 'learning_rate': 1.7e-05, 'epoch': 0.45}
{'loss': 0.0, 'learning_rate': 1.6000000000000003e-05, 'epoch': 0.6}
{'loss': 0.0, 'learning_rate': 1.5000000000000002e-05, 'epoch': 0.75}
{'loss': 0.0, 'learning_rate': 1.4e-05, 'epoch': 0.9}
@phantommlin hello, has your loss = 0 problem been solved?
Not yet.
It may be hardware-related: https://github.com/mymusise/ChatGLM-Tuning/issues/19
I'm training on a P40; with batch_size = 1 the loss is also 0. Have you solved this? {"epoch": 0.0, "learning_rate": 1.9980769230769233e-05, "loss": 0.0, "step": 50}, {"epoch": 0.0, "learning_rate": 1.9961538461538464e-05, "loss": 0.0, "step": 100}, {"epoch": 0.0, "learning_rate": 1.9942307692307695e-05, "loss": 0.0, "step": 150}
Update: with batch_size = 2, the loss at step 50 is non-zero, but every later step is 0 again. It feels like a bug. {"epoch":0.0,"learning_rate":1.9980769230769233e-05,"loss":1.6446,"step":50}, {"epoch":0.0,"learning_rate":1.9961538461538464e-05,"loss":0.0,"step":100}, {"epoch":0.01,"learning_rate":1.9942307692307695e-05,"loss":0.0,"step":150}, {"epoch":0.01,"learning_rate":1.9923076923076926e-05,"loss":0.0,"step":200}
I ran into this problem too, on a V100. So far I've traced it to this line in modeling_chatGLM::SelfAttention::forward(): output = self.dense(context_layer)
The output contains inf and -inf. self.dense is of type <class 'bitsandbytes.nn.modules.Linear8bitLt'>, so at first glance it looks like an int8-quantization problem, but the implementation inside Linear8bitLt is complicated and I haven't worked it out yet. I still don't know the root cause; it has been bothering me for two days...
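One way to locate which layer first produces inf/-inf, as described above, is a PyTorch forward hook. This is a generic sketch (the helper names are my own, and the toy model in the usage below is a stand-in, not ChatGLM's actual layers):

```python
import torch
import torch.nn as nn

def make_inf_hook(name, bad_layers):
    """Append the module's name to bad_layers if its output contains inf or nan."""
    def hook(module, inputs, output):
        if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
            bad_layers.append(name)
    return hook

def find_nonfinite_layers(model, x):
    """Run one forward pass and return the names of modules with non-finite outputs."""
    bad_layers = []
    handles = [m.register_forward_hook(make_inf_hook(n, bad_layers))
               for n, m in model.named_modules() if n]
    model(x)
    for h in handles:
        h.remove()
    return bad_layers
```

For example, a linear layer whose output overflows float32 would be reported, which is the same symptom seen in SelfAttention's `self.dense` here.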

I solved this: enable fp16 in the script and set load_in_8bit to False when loading the model, and training then works normally. I couldn't figure out the exact reason either.
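A minimal sketch of that workaround, assuming the standard transformers loading path (the model name, output dir, and batch size here are placeholders, not the repo's exact script):

```python
from transformers import AutoModel, TrainingArguments

# Skip bitsandbytes int8 quantization and train in fp16 instead,
# which is the combination reported to avoid the loss = 0 issue above.
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b",
    load_in_8bit=False,       # instead of load_in_8bit=True
    trust_remote_code=True,
).half().cuda()

training_args = TrainingArguments(
    output_dir="output",
    fp16=True,                # enable fp16 in the Trainer
    per_device_train_batch_size=1,
)
```

This is a config fragment for illustration; it requires a GPU and the model weights to actually run.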
With fp16 enabled and load_in_8bit set to False, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
With fp16 disabled and load_in_8bit set to True, it runs normally but the loss is always 0.
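For what it's worth, that RuntimeError generally means the inputs (or part of the model) are still on the CPU while the rest is on cuda:0. A minimal sketch of the usual fix, with a plain nn.Linear standing in for the fine-tuned model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)   # stand-in for the model being fine-tuned
x = torch.randn(2, 4)     # a batch, initially on the CPU

# Move the model and the inputs onto the same device before the forward
# pass; mixing cuda:0 parameters with CPU tensors raises the error above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
x = x.to(device)
out = model(x)
```

In this thread the mismatch most likely came from the loading path, so checking where `device_map` or `.cuda()` is applied is a reasonable first step.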
> With fp16 enabled and load_in_8bit set to False, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
> With fp16 disabled and load_in_8bit set to True, it runs normally but the loss is always 0.

Problem solved: after updating peft, it works.
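To check which peft version is actually installed before and after upgrading, a small generic helper (not part of this repo) works for any package:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if the package is missing."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("peft"))
```

`pip install -U peft` then upgrades to the latest release.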
@SizhaoXu bro, does the "with fp16 disabled and load_in_8bit set to True, it runs normally but the loss is always 0" case work for you now?
> With fp16 enabled and load_in_8bit set to False, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm) With fp16 disabled and load_in_8bit set to True, it runs normally but the loss is always 0.
> Problem solved: after updating peft, it works.

Hi, which version did you update peft to? I'm already on v0.2.0.
> With fp16 enabled and load_in_8bit set to False, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm) With fp16 disabled and load_in_8bit set to True, it runs normally but the loss is always 0.
> Problem solved: after updating peft, it works.

Which version of peft did you update to?
Do you mean updating peft to the latest version, 0.3.0.dev0?
> With fp16 enabled and load_in_8bit set to False, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

How do I fix this error?
> With fp16 enabled and load_in_8bit set to False, I get this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm) With fp16 disabled and load_in_8bit set to True, it runs normally but the loss is always 0.
> Problem solved: after updating peft, it works.

Which version of peft are you using now?