[BUG/Help] batch_size mismatch error during evaluation
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
Traceback (most recent call last):
  File "/home/cong009/.pycharm_helpers/pydev/pydevd.py", line 1415, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/cong009/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/mnt/laiqinghan/project/1_FUTU/llmModels/accelerate_chatglm6b/singleGPU_semantic_finetuning.py", line 232, in
Does anyone know what causes this?
Expected Behavior
No response
Steps To Reproduce
a
Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :
Anything else?
No response
I hit the same problem during evaluation. Has it been resolved?
https://github.com/THUDM/ChatGLM-6B/issues/620#issuecomment-1537448881
- When calling `eval_dataset.map`: change `preprocess_function_eval` to `preprocess_function_train`
- In the `do_eval` branch: change the call to `metrics = trainer.evaluate(metric_key_prefix="eval")`
- In the launch script: make sure `predict_with_generate` is `False`
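A minimal sketch of the three changes above, assuming the Hugging Face `Seq2SeqTrainer` setup used by the ChatGLM-6B ptuning example; names such as `trainer`, `eval_dataset`, `column_names`, and the preprocess functions come from that context and may differ in your script:

```python
# Sketch of the workaround, not a drop-in patch.

# 1. Tokenize the eval split with the *training* preprocessor, so labels
#    are padded to a fixed length matching the inputs (avoids the
#    batch_size / shape mismatch when computing the loss).
eval_dataset = eval_dataset.map(
    preprocess_function_train,   # was: preprocess_function_eval
    batched=True,
    remove_columns=column_names,
)

# 2. Evaluate without passing generation-specific keyword arguments.
metrics = trainer.evaluate(metric_key_prefix="eval")

# 3. In the launch script / Seq2SeqTrainingArguments, disable
#    generate-based evaluation, which produces variable-length outputs:
#    --predict_with_generate False
```

With `predict_with_generate` disabled, evaluation computes the loss via a normal forward pass instead of `model.generate`, so the eval batches must be preprocessed the same way as training batches.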