
chatglm2-6b LoRA fine-tuning with int4 precision throws an error

Open hehuomu opened this issue 1 year ago • 0 comments

How should I modify the code to use the low-precision model? My GPU can't handle the full one. The log is as follows:

```
'\nlen(dataset)=1\n'
'loading init model...'
Failed to load cpm_kernels: No module named 'cpm_kernels'
Traceback (most recent call last):
  File "LLM-Tuning/chatglm2_lora_tuning.py", line 172, in <module>
    main()
  File "LLM-Tuning/chatglm2_lora_tuning.py", line 98, in main
    model = AutoModel.from_pretrained(
  File "anaconda3/envs/py39/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 462, in from_pretrained
    return model_class.from_pretrained(
  File "anaconda3/envs/py39/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2611, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "cache/huggingface/modules/transformers_modules/THUDM/chatglm2-6b-int4/382cc704867dc2b78368576166799ace0f89d9ef/modeling_chatglm.py", line 859, in __init__
    self.quantize(self.config.quantization_bit, empty_init=True)
  File ".cache/huggingface/modules/transformers_modules/THUDM/chatglm2-6b-int4/382cc704867dc2b78368576166799ace0f89d9ef/modeling_chatglm.py", line 1191, in quantize
    self.transformer.encoder = quantize(self.transformer.encoder, bits, empty_init=empty_init, device=device,
  File ".cache/huggingface/modules/transformers_modules/THUDM/chatglm2-6b-int4/382cc704867dc2b78368576166799ace0f89d9ef/quantization.py", line 511, in quantize
    layer.self_attention.query_key_value = QuantizedLinear(
  File ".cache/huggingface/modules/transformers_modules/THUDM/chatglm2-6b-int4/382cc704867dc2b78368576166799ace0f89d9ef/quantization.py", line 494, in __init__
    self.weight = Parameter(self.weight.to(device), requires_grad=False)
  File "anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1632, in __setattr__
    self.register_parameter(name, value)
  File "anaconda3/envs/py39/lib/python3.9/site-packages/accelerate/big_modeling.py", line 104, in register_empty_parameter
    old_register_parameter(module, name, param)
  File "anaconda3/envs/py39/lib/python3.9/site-packages/accelerate/big_modeling.py", line 108, in register_empty_parameter
    module._parameters[name] = param_cls(module._parameters[name].to(device), **kwargs)
  File "anaconda3/envs/py39/lib/python3.9/site-packages/torch/nn/parameter.py", line 36, in __new__
    return torch.Tensor._make_subclass(cls, data, requires_grad)
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```
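Editor's hedged sketch (not from the issue itself): the log shows two likely culprits, the missing `cpm_kernels` package (`pip install cpm_kernels`) and `accelerate`'s empty-weight (meta-device) init wrapping the model's integer quantized weights in a `Parameter`. One workaround to try is to load real weights directly by passing `low_cpu_mem_usage=False`, then move the model to the GPU yourself. The helper below only assembles `from_pretrained` keyword arguments; its name is my own, not part of the repo.

```python
def build_load_kwargs() -> dict:
    """Assemble kwargs for AutoModel.from_pretrained (hypothetical helper).

    Assumption: skipping accelerate's empty (meta) init avoids the
    "Only Tensors of floating point and complex dtype can require
    gradients" error when the int4 checkpoint quantizes itself.
    """
    return {
        "trust_remote_code": True,   # chatglm2 ships custom modeling code
        "low_cpu_mem_usage": False,  # load real weights, no meta tensors
    }

# Intended use (needs network access and a GPU, so shown as a comment):
# from transformers import AutoModel
# model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4",
#                                   **build_load_kwargs()).cuda()
```

This is a sketch under stated assumptions, not a confirmed fix; installing `cpm_kernels` first is the cheaper thing to verify.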

hehuomu · Aug 22 '23 02:08