LinaZhangCoding
> > Thanks for your feedback! Do you use the `AutoPeftModelForCausalLM` class [here](https://github.com/InternLM/InternLM-XComposer/blob/main/finetune/README.md#lora-finetuning) to load the model?
>
> Hello, thanks for your work! I'd like to ask: after loading the model with `AutoPeftModelForCausalLM` and continuing training following the LoRA setup code in finetune.py, I get the error below. How can I resolve it? I confirmed that I set `model.tokenizer`, but it doesn't seem to have taken effect.
>
> ```
> to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
>   File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226, ...
> ```
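A minimal loading sketch, assuming a LoRA adapter saved by finetune.py (the adapter path `output/lora_adapter` and the base model ID are illustrative assumptions). One possible cause of the error above is that `AutoPeftModelForCausalLM` returns a `PeftModel` wrapper, so assigning `model.tokenizer` attaches the attribute to the wrapper rather than to the underlying InternLM-XComposer model whose `interleav_wrap` reads `self.tokenizer`; setting it on the base model via `get_base_model()` may help:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical adapter path produced by the LoRA finetuning script.
adapter_path = "output/lora_adapter"

# Load the base model together with the LoRA adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_path,
    torch_dtype=torch.float16,
    trust_remote_code=True,
).cuda()

# Load the tokenizer from the base model the adapter was trained from
# (assumed ID; substitute the checkpoint you actually used).
tokenizer = AutoTokenizer.from_pretrained(
    "internlm/internlm-xcomposer2-4khd-7b",
    trust_remote_code=True,
)

# Assumption: interleav_wrap reads self.tokenizer on the *base* model,
# so attach the tokenizer there, not only on the PeftModel wrapper.
model.get_base_model().tokenizer = tokenizer
```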
Where is the "gpt2-base" model? Is it `init-gpt2-120M`? I downloaded several checkpoints (`init-gpt2-120M`, `MiniLLM-gpt2-120M`, `SFT-gpt2-120M`), but there is no "gpt2-base" model among them.
3.2 Change Model Parallel Size

You can increase/decrease the tensor parallel size with:

```bash
python3 tools/convert_mp.py \
    --input_path results/llama/train/minillm/7B-init-13B-sft \
    --source_mp_size 1 \
    --target_mp_size 4 \
    --model_type llama # choose from ...
```
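For intuition, here is a conceptual sketch of what such a conversion involves (not the actual convert_mp.py logic): when increasing the parallel size, column-parallel weights are split along the output dimension and row-parallel weights along the input dimension; merging reverses this with a concatenation. The function names and the choice of which matrices are column- vs. row-parallel are illustrative assumptions:

```python
import torch

def split_column_parallel(weight: torch.Tensor, target_mp: int) -> list[torch.Tensor]:
    """Shard a column-parallel weight (e.g. an attention QKV projection)
    along its output dimension (dim 0 in the usual [out, in] layout)."""
    return list(torch.chunk(weight, target_mp, dim=0))

def split_row_parallel(weight: torch.Tensor, target_mp: int) -> list[torch.Tensor]:
    """Shard a row-parallel weight (e.g. the attention output projection)
    along its input dimension (dim 1)."""
    return list(torch.chunk(weight, target_mp, dim=1))

def merge_column_parallel(shards: list[torch.Tensor]) -> torch.Tensor:
    """Inverse of split_column_parallel: concatenate shards along dim 0."""
    return torch.cat(shards, dim=0)

# Going from mp_size 1 to 4: one full weight becomes four shards.
full = torch.randn(4096, 4096)
shards = split_column_parallel(full, target_mp=4)
assert all(s.shape == (1024, 4096) for s in shards)
# Merging the shards recovers the original weight exactly.
assert torch.equal(merge_column_parallel(shards), full)
```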