onenotell
Piggybacking on this thread to ask about this code path:

```python
if finetuning_args.stage in ["rm", "ppo"] and finetuning_args.finetuning_type in ["full", "freeze"]:
    can_resume_from_checkpoint = False
    if training_args.resume_from_checkpoint is not None:
        logger.warning("Cannot resume from checkpoint in current stage.")
        training_args.resume_from_checkpoint = None
else:
    ...
```
> See [#559](https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/559#issuecomment-1585948697): modifying the transformers source file site-packages/transformers/integrations/deepspeed.py resolves the problem.

Tested working with Python 3.10 and transformers 4.28.2. The patch can also be applied directly with a shell one-liner, e.g. when building the image:

```bash
sed -i 's/self.model_wrapped, resume_from_checkpoint, load_module_strict=not _is_peft_model(self.model)/self.model_wrapped, resume_from_checkpoint, load_module_strict=False/g' \
  /usr/local/lib/python3.10/dist-packages/transformers/trainer.py
```
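If editing site-packages is not an option, the same effect can be had with a runtime monkeypatch. This is a minimal sketch, not from the thread: it assumes a transformers version recent enough that trainer.py imports deepspeed_load_checkpoint and passes the load_module_strict argument targeted by the sed command above; the names `_original_load` and `_lenient_load` are illustrative, not part of any library.

```python
# Minimal sketch (assumption, not from this thread): force non-strict DeepSpeed
# checkpoint loading without editing installed files. Assumes trainer.py does
#   from .integrations.deepspeed import deepspeed_load_checkpoint
# and calls it with a load_module_strict argument, as in recent transformers.
import transformers.trainer as hf_trainer

_original_load = hf_trainer.deepspeed_load_checkpoint

def _lenient_load(deepspeed_engine, checkpoint_path, load_module_strict=True):
    # Ignore the strictness requested by the Trainer and always load
    # non-strictly, mirroring the sed patch above.
    return _original_load(deepspeed_engine, checkpoint_path, load_module_strict=False)

# Rebind the name that the Trainer's call site resolves at call time.
hf_trainer.deepspeed_load_checkpoint = _lenient_load
```

Apply the patch before the Trainer is constructed. Unlike the sed edit, it does not persist across processes, so it has to run in every training entrypoint.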