MiniCPM-V

Training loss decreases normally, but evaluation on the validation set always yields NaN

Open zysNLP opened this issue 1 year ago • 4 comments

As the title says: I searched the related issues and found nothing like this. My dataset has a bit over 60k samples. With `--save_steps=1000` and `--eval_steps=1000`, running one full epoch over the dataset gives about 45% accuracy. But during training, every time evaluation runs on the validation set, no matter how many validation samples there are, it prints `NaN or Inf found in input tensor.` followed by **{'eval_loss': nan**, as below:

Various possible causes have been mentioned elsewhere, but I still want a usable validation loss as a reference to guard against overfitting and so on. Does anyone know what is going on?

```
{'loss': 2.7202, 'grad_norm': 0.0, 'learning_rate': 0, 'epoch': 0.0}
{'loss': 2.2612, 'grad_norm': 5.397487163543701, 'learning_rate': 0.0, 'epoch': 0.0}
{'loss': 2.4777, 'grad_norm': 6.154934883117676, 'learning_rate': 1.5051499783199056e-08, 'epoch': 0.0}
{'loss': 2.4688, 'grad_norm': 5.465117454528809, 'learning_rate': 2.3856062735983118e-08, 'epoch': 0.0}
{'loss': 2.5174, 'grad_norm': 6.046875, 'learning_rate': 3.010299956639811e-08, 'epoch': 0.0}
{'loss': 2.5364, 'grad_norm': 5.665204048156738, 'learning_rate': 3.494850021680093e-08, 'epoch': 0.0}
{'loss': 2.0698, 'grad_norm': 5.009426593780518, 'learning_rate': 3.8907562519182174e-08, 'epoch': 0.0}
{'loss': 1.9787, 'grad_norm': 4.761078834533691, 'learning_rate': 4.225490200071283e-08, 'epoch': 0.0}
{'loss': 2.5938, 'grad_norm': 6.457172393798828, 'learning_rate': 4.5154499349597164e-08, 'epoch': 0.0}
  0%| | 10/10000 [00:36<9:49:02, 3.54s/it]

NaN or Inf found in input tensor.
{'eval_loss': nan, 'eval_runtime': 4.1145, 'eval_samples_per_second': 4.861, 'eval_steps_per_second': 1.215, 'epoch': 0.0}

{'loss': 2.4066, 'grad_norm': 4.820044040679932, 'learning_rate': 4.7712125471966236e-08, 'epoch': 0.0}
{'loss': 2.4178, 'grad_norm': 6.053901672363281, 'learning_rate': 4.999999999999999e-08, 'epoch': 0.0}
{'loss': 2.3392, 'grad_norm': 6.651149749755859, 'learning_rate': 5.206963425791124e-08, 'epoch': 0.0}
{'loss': 2.585, 'grad_norm': 5.352585792541504, 'learning_rate': 5.395906230238123e-08, 'epoch': 0.0}
{'loss': 2.5957, 'grad_norm': 7.176639080047607, 'learning_rate': 5.5697167615341824e-08, 'epoch': 0.0}
{'loss': 2.3703, 'grad_norm': 5.395962715148926, 'learning_rate': 5.730640178391188e-08, 'epoch': 0.0}
{'loss': 2.6114, 'grad_norm': 5.684944152832031, 'learning_rate': 5.8804562952784053e-08, 'epoch': 0.0}
{'loss': 2.4315, 'grad_norm': 5.432509422302246, 'learning_rate': 6.020599913279622e-08, 'epoch': 0.0}
{'loss': 2.31, 'grad_norm': 5.274387359619141, 'learning_rate': 6.152244606891369e-08, 'epoch': 0.0}
{'loss': 2.6989, 'grad_norm': 5.854457855224609, 'learning_rate': 6.276362525516528e-08, 'epoch': 0.0}
  0%| | 20/10000 [01:40<11:13:45, 4.05s/it]

NaN or Inf found in input tensor.
{'eval_loss': nan, 'eval_runtime': 3.6883, 'eval_samples_per_second': 5.423, 'eval_steps_per_second': 1.356, 'epoch': 0.0}
```

zysNLP avatar Aug 02 '24 12:08 zysNLP

Are your validation set and training set run at the same precision, e.g. both fp16 or both bf16?

LDLINGLINGLING avatar Aug 05 '24 03:08 LDLINGLINGLING
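For context on why the precision question matters: fp16's representable range tops out at 65504, while bf16 shares float32's exponent range. Values that survive in bf16/fp32 can overflow to `inf` under fp16, and an `inf - inf` (e.g. inside a softmax normalization) becomes NaN. A minimal numpy sketch, illustrative only and not MiniCPM-V code:

```python
import numpy as np

# fp16 overflows at |x| > 65504; fp32 (and bf16's exponent range) does not
# at this magnitude.
x16 = np.float16(70000.0)
x32 = np.float32(70000.0)

print(x16)        # inf: the value overflowed fp16's range
print(x16 - x16)  # nan: inf - inf, the kind of op that poisons a loss
print(x32)        # 70000.0: fine in fp32
```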

@LDLINGLINGLING Yes, this is the fine-tuning command:

```shell
CUDA_VISIBLE_DEVICES=1,3,4,5 torchrun $DISTRIBUTED_ARGS finetune.py \
    --model_name_or_path $MODEL \
    --llm_type $LLM_TYPE \
    --data_path $DATA \
    --eval_data_path $EVAL_DATA \
    --remove_unused_columns false \
    --label_names "labels" \
    --prediction_loss_only false \
    --bf16 false \
    --bf16_full_eval false \
    --fp16 true \
    --fp16_full_eval true \
    --do_train \
    --do_eval \
    --tune_vision true \
    --tune_llm false \
    --use_lora true \
    --lora_target_modules "llm..*layers.\d+.self_attn.(q_proj|k_proj|v_proj|o_proj)" \
    --model_max_length 1024 \
    --max_slice_nums 9 \
    --max_steps 15000 \
    --eval_steps 1000 \
    --output_dir output_v2/output_minicpmv2_lora \
    --logging_dir output_v2/output_minicpmv2_lora \
    --logging_strategy "steps" \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "steps" \
    --save_strategy "steps" \
    --save_steps 1000 \
    --save_total_limit 10 \
    --learning_rate 1e-6 \
    --weight_decay 0.1 \
    --adam_beta2 0.95 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --gradient_checkpointing true \
    --deepspeed ds_config_zero2.json \
    --report_to "tensorboard" # wandb
```

zysNLP avatar Aug 05 '24 03:08 zysNLP
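If the base checkpoint was released in bf16, running both training and full evaluation in fp16 (as the `--fp16`/`--fp16_full_eval` flags above do) is one plausible overflow source. A hedged variant of those four flags, assuming the GPUs support bf16 (Ampere or newer), would be:

```shell
# Hypothetical flag variant: keep train and eval in bf16 instead of fp16.
--bf16 true \
--bf16_full_eval true \
--fp16 false \
--fp16_full_eval false \
```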

Does the problem go away if you switch to a different validation set?

LDLINGLINGLING avatar Aug 05 '24 04:08 LDLINGLINGLING

@LDLINGLINGLING I have already tried a different validation set; it doesn't help. My target outputs are key-value pairs containing quotation marks. At actual inference time the model does produce results; the accuracy is only so-so, but that is a data-quality issue. No matter how much data the validation set contains, the eval loss is always nan.

zysNLP avatar Aug 05 '24 04:08 zysNLP
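One precision-independent cause worth ruling out: evaluation samples whose labels are entirely masked with `-100` (for example, if the whole target gets truncated away by `--model_max_length 1024`). Loss averaging over non-ignored tokens then divides zero by zero, yielding nan. A minimal numpy sketch of that averaging (hypothetical helper, not the trainer's actual code):

```python
import numpy as np

def masked_mean_loss(per_token_loss, labels, ignore_index=-100):
    # Average the loss over non-ignored tokens only, as causal-LM
    # losses typically do. If every label is ignored, this is 0/0 -> nan.
    mask = labels != ignore_index
    return per_token_loss[mask].sum() / mask.sum()

losses = np.array([2.1, 1.8, 2.4])
ok = masked_mean_loss(losses, np.array([5, 7, 9]))            # finite
bad = masked_mean_loss(losses, np.array([-100, -100, -100]))  # nan
print(ok, bad)
```

If that is the culprit, dropping eval samples that contain no supervised tokens (or raising `model_max_length`) may restore a finite `eval_loss`.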