[BUG] TypeError: CPMTrainer.training_step() takes 3 positional arguments but 4 were given
是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
- [X] 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
- [X] 我已经搜索过FAQ | I have searched FAQ
当前行为 | Current Behavior
```
[rank0]: Traceback (most recent call last):
[rank0]:   File "/opt/MiniCPM-V-main/finetune/finetune.py", line 299, in <module>
[rank0]:     train()
[rank0]:   File "/opt/MiniCPM-V-main/finetune/finetune.py", line 289, in train
[rank0]:     trainer.train()
[rank0]:   File "/root/miniconda3/envs/transformers/lib/python3.10/site-packages/transformers/trainer.py", line 2164, in train
[rank0]:     return inner_training_loop(
[rank0]:   File "/root/miniconda3/envs/transformers/lib/python3.10/site-packages/transformers/trainer.py", line 2524, in _inner_training_loop
[rank0]:     tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: TypeError: CPMTrainer.training_step() takes 3 positional arguments but 4 were given
```
I got this error when trying to fine-tune the model. Does anyone know how to solve it?
期望行为 | Expected Behavior
No response
复现方法 | Steps To Reproduce
No response
运行环境 | Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):
备注 | Anything else?
No response
Same problem, recently appeared
Same here, has anybody solved it?
It is because of the transformers version:
- transformers==4.47.1: works with vLLM (used for inference)
- transformers==4.40.0: trainable (used for training)

My current solution is swapping between these two. Looking forward to a more feasible approach...
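Instead of swapping versions, another option may be to patch the override itself. Newer transformers releases (roughly 4.46+) pass an extra `num_items_in_batch` positional argument to `Trainer.training_step`, which is exactly what the traceback shows. A minimal sketch (the `CPMTrainer` below is a stand-in, not the real class from `finetune.py`): give the overridden method a defaulted third parameter so it accepts calls from both old and new transformers versions.

```python
# Sketch only: a stand-in subclass showing the signature change needed.
# Newer transformers (>= ~4.46) calls training_step(model, inputs,
# num_items_in_batch); older versions call training_step(model, inputs).
# A defaulted parameter makes the override compatible with both.
from typing import Any, Dict, Optional


class CPMTrainer:  # stand-in for the real Trainer subclass in finetune/trainer.py
    def training_step(
        self,
        model: Any,
        inputs: Dict[str, Any],
        num_items_in_batch: Optional[int] = None,  # new arg, ignored if unused
    ) -> float:
        # ... the original loss computation would go here ...
        return 0.0  # placeholder loss


trainer = CPMTrainer()
trainer.training_step(None, {})       # old call style (transformers <= 4.45)
trainer.training_step(None, {}, 8)    # new call style (transformers >= 4.46)
```

If the custom `training_step` also forwards to `compute_loss`, that override may need the same treatment, since recent versions pass `num_items_in_batch` there as well.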
Same problem
My environment is a bit messy, but this works for me.

Training:
```
pip uninstall vllm-flash-attn
pip uninstall xformers
pip uninstall openai
pip install -r ../requirements.txt
```

Testing:
```
pip install vllm==0.5.4
```

Flipping between these two works well for me.