self-llm
05-Qwen2-7B-Instruct Lora fine-tuning error
Hi, could you help me take a look at this issue? When loading the model for training, I execute the following code:
trainer = Trainer(
model=model,
args=args,
train_dataset=tokenized_id,
data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True)
)
The error output is as follows:
"name": "TypeError",
"message": "__init__() got an unexpected keyword argument 'use_seedable_sampler'",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [21], in <cell line: 1>()
----> 1 trainer = Trainer(
2 model=model,
3 args=args,
4 train_dataset=tokenized_id,
5 data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True)
6 )
File ~/miniconda3/lib/python3.8/site-packages/transformers/trainer.py:402, in Trainer.__init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics)
399 self.deepspeed = None
400 self.is_in_train = False
--> 402 self.create_accelerator_and_postprocess()
404 # memory metrics - must set up as early as possible
    405 self._memory_tracker = TrainerMemoryTracker(self.args.skip_memory_metrics)
File ~/miniconda3/lib/python3.8/site-packages/transformers/trainer.py:4535, in Trainer.create_accelerator_and_postprocess(self)
4532 args.update(accelerator_config)
4534 # create accelerator object
-> 4535 self.accelerator = Accelerator(**args)
4536 # some Trainer classes need to use `gather` instead of `gather_for_metrics`, thus we store a flag
4537 self.gather_function = self.accelerator.gather_for_metrics
TypeError: __init__() got an unexpected keyword argument 'use_seedable_sampler'"
}
My main environment configuration:
python: 3.8.10
torch: 2.1.2
transformers: 4.41.2
accelerate: 0.24.1
datasets: 2.10.1
peft: 0.4.0
Driver Version: 535.104.05
CUDA Version: 12.2
Additionally, I updated peft from 0.4.0 to 0.10.0 while keeping all other versions unchanged, and the same error still occurs. Why is that? Any guidance would be much appreciated~
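A note on the mechanism, as an assumption to verify rather than a confirmed diagnosis: the traceback shows transformers 4.41.2 passing `use_seedable_sampler` into `Accelerator(**args)`, and accelerate 0.24.1 rejecting it, which is the classic symptom of a keyword argument that only exists in a newer release of the callee's API. That would also explain why changing peft has no effect, since peft is not on the failing call path. The `OldAccelerator` class below is purely hypothetical (it is not the real accelerate API); it only sketches how an out-of-date `__init__` signature produces exactly this `TypeError`:

```python
# Hypothetical stand-in for an older Accelerator whose __init__
# predates the 'use_seedable_sampler' keyword.
class OldAccelerator:
    def __init__(self, split_batches=False):
        self.split_batches = split_batches

# Mimics what Trainer.create_accelerator_and_postprocess does:
# it forwards a dict of config keys as keyword arguments.
args = {"split_batches": False, "use_seedable_sampler": True}

try:
    OldAccelerator(**args)
except TypeError as e:
    # The unknown keyword is rejected at call time, before any
    # training code runs -- matching the reported traceback.
    print(type(e).__name__, e)
```

If this reading is right, the fix would be to upgrade accelerate to a version whose `Accelerator` accepts the keywords that transformers 4.41.2 forwards (e.g. `pip install -U accelerate`), or alternatively to pin transformers to a release contemporary with accelerate 0.24.1.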