InternLM-XComposer

How can LoRA fine-tuned model weights be loaded in finetune.py for a second round of fine-tuning?

Open dle666 opened this issue 1 year ago • 6 comments

The existing model-loading code raises an error: the config file cannot be found. Can I just use AutoModel instead, as shown in the screenshot below? [screenshot]

dle666 avatar Apr 19 '24 03:04 dle666

The existing model-loading code raises an error: the config file cannot be found. -> Can you provide the corresponding error log? Thanks.

yuhangzang avatar Apr 22 '24 11:04 yuhangzang

The existing model-loading code raises an error: the config file cannot be found. -> Can you provide the corresponding error log? Thanks.

[screenshot of the error log]

dle666 avatar Apr 22 '24 11:04 dle666

Thanks for your feedback! Do you use the AutoPeftModelForCausalLM class here to load the model?

yuhangzang avatar Apr 23 '24 04:04 yuhangzang
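The suggested approach can be sketched as follows. This is a minimal sketch, not the repo's own code: the helper name and adapter_dir are hypothetical, and it assumes the peft package is installed. The adapter directory must contain adapter_config.json, from which PEFT resolves the base model via its base_model_name_or_path field; this avoids the missing-config error seen when pointing a plain AutoModel load at the adapter folder.

```python
# Hedged sketch: load a saved LoRA checkpoint for continued fine-tuning.
# `load_lora_for_finetune` and `adapter_dir` are hypothetical names.

def load_lora_for_finetune(adapter_dir):
    from peft import AutoPeftModelForCausalLM  # requires `pip install peft`
    import torch

    model = AutoPeftModelForCausalLM.from_pretrained(
        adapter_dir,
        is_trainable=True,          # keep the LoRA parameters trainable
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,     # InternLM-XComposer2 ships custom modeling code
    )
    return model
```

The returned object is a PeftModel wrapper around the base model, which matters for the tokenizer issue discussed below in this thread.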

Thanks for your feedback! Do you use the AutoPeftModelForCausalLM class here to load the model?

Hello, and thanks for your work! After loading the model with AutoPeftModelForCausalLM and continuing training following the LoRA setup code in finetune.py, I get the error below. How can I fix it? I have confirmed that I set model.tokenizer, but it does not seem to take effect:

    to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
  File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226, in interleav_wrap
    part_tokens = self.tokenizer(
TypeError: 'NoneType' object is not callable

Starfulllll avatar May 28 '24 18:05 Starfulllll
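The TypeError above suggests the tokenizer attribute never reached the base model. Below is a toy sketch (the classes are stand-ins, not the real PEFT or InternLM code) of how an attribute set on a wrapper object can miss the wrapped module: AutoPeftModelForCausalLM returns a PeftModel wrapper, while interleav_wrap in modeling_internlm_xcomposer2.py reads self.tokenizer on the base model. If this is the cause, setting the tokenizer on the unwrapped module (e.g. via PeftModel's get_base_model()) may resolve it.

```python
# Toy sketch (stand-in classes, not the real PEFT/InternLM code) of why
# `model.tokenizer = tokenizer` can fail after AutoPeftModelForCausalLM:
# the attribute lands on the PEFT wrapper, while interleav_wrap() reads
# self.tokenizer on the base model.

class BaseModel:
    tokenizer = None                      # read inside interleav_wrap()

    def interleav_wrap(self, text):
        return self.tokenizer(text)       # TypeError: 'NoneType' if unset

class PeftWrapper:                        # stand-in for the PEFT wrapper
    def __init__(self, base):
        self.base_model = base

base = BaseModel()
model = PeftWrapper(base)

model.tokenizer = str.upper               # lands on the wrapper only
print(base.tokenizer is None)             # → True: the base model never saw it

base.tokenizer = str.upper                # set it on the unwrapped module instead
print(base.interleav_wrap("ok"))          # → OK
```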

Thanks for your feedback! Do you use the AutoPeftModelForCausalLM class here to load the model?

Hello, and thanks for your work! After loading the model with AutoPeftModelForCausalLM and continuing training following the LoRA setup code in finetune.py, I get the error below. How can I fix it? I have confirmed that I set model.tokenizer, but it does not seem to take effect:

    to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
  File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226, in interleav_wrap
    part_tokens = self.tokenizer(
TypeError: 'NoneType' object is not callable

The code for continued training is as follows:

# Start trainer

trainer = Trainer(
    model=model, tokenizer=tokenizer, args=training_args, **data_module)

trainer.train(resume_from_checkpoint=True)
trainer.save_state()

WeiminLee avatar Jun 06 '24 00:06 WeiminLee

Thanks for your feedback! Do you use the AutoPeftModelForCausalLM class here to load the model?

Hello, and thanks for your work! After loading the model with AutoPeftModelForCausalLM and continuing training following the LoRA setup code in finetune.py, I get the error below. How can I fix it? I have confirmed that I set model.tokenizer, but it does not seem to take effect:

    to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
  File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226, in interleav_wrap
    part_tokens = self.tokenizer(
TypeError: 'NoneType' object is not callable

I get the same error. I am only loading the model for inference, not training or fine-tuning.

LinaZhangCoding avatar Jul 05 '24 00:07 LinaZhangCoding