How can I load LoRA-finetuned model weights in finetune.py for a second round of finetuning?
The current model-loading code raises an error saying the config file cannot be found. Can I simply use AutoModel instead, as shown in the figure below?
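For context, a LoRA/PEFT output directory typically contains only the adapter files (adapter_config.json plus the adapter weights), not the full config.json that AutoModel.from_pretrained looks for, which is why the plain loading path fails. A small sketch illustrating the file layout (the helper `is_peft_adapter_dir` and the temporary directory are fabricated for illustration; the file names follow PEFT's saving conventions):

```python
# Sketch: why AutoModel.from_pretrained fails on a LoRA output directory.
# A PEFT/LoRA checkpoint saves adapter files, not the base model's config.json.
import os
import tempfile

def is_peft_adapter_dir(path):
    # PEFT writes adapter_config.json; the base config.json is absent
    has_adapter = os.path.exists(os.path.join(path, "adapter_config.json"))
    has_config = os.path.exists(os.path.join(path, "config.json"))
    return has_adapter and not has_config

with tempfile.TemporaryDirectory() as d:
    # Simulate a LoRA checkpoint directory
    open(os.path.join(d, "adapter_config.json"), "w").close()
    open(os.path.join(d, "adapter_model.safetensors"), "w").close()
    print(is_peft_adapter_dir(d))  # True
```

A directory like this should be loaded with a PEFT-aware loader (e.g. AutoPeftModelForCausalLM) rather than AutoModel.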
The current model-loading code raises an error saying the config file cannot be found. -> Can you provide the corresponding error log? Thanks.
Thanks for your feedback! Do you use the AutoPeftModelForCausalLM class here to load the model?
Hello, and thanks for your work! After loading the model with AutoPeftModelForCausalLM and continuing training following the LoRA setup code in finetune.py, I get the error below. How can I resolve it? I did set model.tokenizer, but it doesn't seem to have taken effect.

    to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
  File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226, in interleav_wrap
    part_tokens = self.tokenizer(
TypeError: 'NoneType' object is not callable
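A likely cause: the PEFT wrapper forwards attribute *reads* to the wrapped InternLM model, but `model.tokenizer = tokenizer` only sets the attribute on the wrapper object, so `self.tokenizer` inside `interleav_wrap` stays None. A minimal pure-Python sketch of that shadowing (`BaseModel`, `PeftWrapper`, and `fake_tokenizer` are simplified stand-ins, not the real PEFT API; with real PEFT one would set the attribute on the underlying model, e.g. via `model.get_base_model()`):

```python
# Sketch (assumption): a PEFT-style wrapper delegates attribute reads to the
# wrapped base model via __getattr__, but attribute writes land on the wrapper.

class BaseModel:
    def __init__(self):
        self.tokenizer = None  # finetune.py normally sets this on the model

    def interleav_wrap(self, text):
        # Raises "'NoneType' object is not callable" if tokenizer is unset
        return self.tokenizer(text)

class PeftWrapper:
    def __init__(self, base):
        self.base_model = base

    def __getattr__(self, name):
        # Only called when the attribute is missing on the wrapper itself,
        # so reads fall through to the base model
        return getattr(self.base_model, name)

def fake_tokenizer(text):
    return text.split()

base = BaseModel()
model = PeftWrapper(base)

model.tokenizer = fake_tokenizer   # lands on the wrapper only
print(base.tokenizer is None)      # True: the base model still has None

model.base_model.tokenizer = fake_tokenizer  # set it on the base model instead
print(model.interleav_wrap("hello world"))   # ['hello', 'world']
```

If this is the cause, setting the tokenizer on the unwrapped model before training should avoid the TypeError.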
The code for continued training is as follows:
# Start trainer
trainer = Trainer(
    model=model, tokenizer=tokenizer, args=training_args, **data_module)
trainer.train(resume_from_checkpoint=True)
trainer.save_state()
I get the same error. I just load the model and want to run inference, not train or finetune.