MiyazonoKaori
Hello, I'm running into the same problem. Is there a known fix?
Solved it by tweaking the tokenizer file: [moss_tokenizer.zip](https://github.com/OpenLMLab/MOSS/files/11704405/moss_tokenizer.zip)
Change the place where the model is saved:

```python
self.accelerator.wait_for_everyone()
unwrapped_model = self.accelerator.unwrap_model(self.model)
unwrapped_model.save_pretrained(
    save_dir,
    is_main_process=self.accelerator.is_main_process,
    save_function=self.accelerator.save,
    state_dict=self.accelerator.get_state_dict(self.model),
)
```

Or change the loading code:

```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch, infer_auto_device_map
from accelerate.utils import get_balanced_memory

config = MossConfig.from_pretrained(model_path)
tokenizer = MossTokenizer.from_pretrained(model_path)
with init_empty_weights():
    ...
```
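For reference, the loading pattern above usually continues roughly as follows. This is only a sketch, not runnable on its own: it assumes a local MOSS checkpoint directory in `model_path`, the `Moss*` classes and the `MossBlock` layer name from the MOSS repo (adjust the import paths to your checkout), and the `accelerate` big-model-inference APIs.

```python
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch, infer_auto_device_map
from accelerate.utils import get_balanced_memory
# Import paths below assume the MOSS repo layout; adjust to your checkout.
from models.configuration_moss import MossConfig
from models.modeling_moss import MossForCausalLM
from models.tokenization_moss import MossTokenizer

model_path = "path/to/moss-checkpoint"  # assumed: your local checkpoint dir

config = MossConfig.from_pretrained(model_path)
tokenizer = MossTokenizer.from_pretrained(model_path)

# Build the model skeleton on the "meta" device without allocating real weights
with init_empty_weights():
    model = MossForCausalLM(config)
model.tie_weights()

# Balance layers across available GPUs; keep each transformer block on one device
max_memory = get_balanced_memory(
    model, no_split_module_classes=["MossBlock"], dtype=torch.float16
)
device_map = infer_auto_device_map(
    model, max_memory=max_memory,
    no_split_module_classes=["MossBlock"], dtype=torch.float16
)

# Load the checkpoint shards and dispatch them according to the device map
model = load_checkpoint_and_dispatch(
    model, model_path, device_map=device_map,
    no_split_module_classes=["MossBlock"], dtype=torch.float16
)
```

The key point is that `no_split_module_classes` must name the model's residual block class so a single block is never split across two GPUs.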
[finetune_moss - 副本.txt](https://github.com/OpenLMLab/MOSS/files/11766790/finetune_moss.-.txt)
That was just for testing; I forgot to delete it.
Here you can just change it back to the original finetune_moss.py code: drop that `for` loop and `append` directly. Use a little common sense.
Same error on Python 3.10; downgrading langflow and langchain did not work.