
PPO: IndexError: index -1 is out of bounds for dimension 0 with size 0

Open · yuanllong opened this issue 5 months ago · 1 comment

Reminder

  • [x] I have read the above rules and searched the existing issues.

System Info

Training starts:
Seq2SeqTrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, batch_eval_metrics=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=False, ddp_timeout=180000000, debug=[], deepspeed=examples/deepspeed/ds_z2_config.json, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_steps=None, eval_strategy=no, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, generation_config=None, generation_max_length=None, generation_num_beams=None, gradient_accumulation_steps=1, gradient_checkpointing=False, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=/local_data/data/Qwen2.5-7B-Instruct_generator_ppo_data_window_v3_test_v8_15/runs/Oct27_07-10-59_node1, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=cosine, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=5.0, optim=adamw_torch, optim_args=None, optim_target_modules=None, output_dir=/local_data/data/Qwen2.5-7B-Instruct_generator_ppo_data_window_v3_test_v8_15, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=2, predict_with_generate=False, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=[], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=/local_data/data/Qwen2.5-7B-Instruct_generator_ppo_data_window_v3_test_v8_15, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=700, save_strategy=steps, save_total_limit=None, seed=42, skip_memory_metrics=True, sortish_sampler=False, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False,
use_liger_kernel=False, use_mps_device=False, warmup_ratio=0.2, warmup_steps=0, weight_decay=0.0, )

Reproduction

[rank1]: Traceback (most recent call last):
[rank1]:   File "/home/LLaMA-Factory/src/llamafactory/launcher.py", line 25, in <module>
[rank1]:     launch()
[rank1]:   File "/home/LLaMA-Factory/src/llamafactory/launcher.py", line 19, in launch
[rank1]:     run_exp()
[rank1]:   File "/home/LLaMA-Factory/src/llamafactory/train/tuner.py", line 54, in run_exp
[rank1]:     run_ppo(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank1]:   File "/home/LLaMA-Factory/src/llamafactory/train/ppo/workflow.py", line 73, in run_ppo
[rank1]:     ppo_trainer.ppo_train(resume_from_checkpoint=training_args.resume_from_checkpoint)
[rank1]:   File "/home/LLaMA-Factory/src/llamafactory/train/ppo/trainer_8.py", line 2436, in ppo_train
[rank1]:     stats = self.step(queries, responses, rewards)
[rank1]:   File "/home/.conda/envs/mmoarag/lib/python3.10/contextlib.py", line 79, in inner
[rank1]:     return func(*args, **kwds)
[rank1]:   File "/home/.conda/envs/mmoarag/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py", line 769, in step
[rank1]:     rewards, non_score_reward, kls = self.compute_rewards(scores, all_logprobs, ref_logprobs, masks)
[rank1]:   File "/home/.conda/envs/mmoarag/lib/python3.10/site-packages/trl/trainer/ppo_trainer.py", line 1134, in compute_rewards
[rank1]:     last_non_masked_index = mask.nonzero()[-1]
[rank1]: IndexError: index -1 is out of bounds for dimension 0 with size 0
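
The failing line is mechanical to reproduce: `compute_rewards` locates the last unmasked token with `mask.nonzero()[-1]`, so any sample whose mask is all zeros (for example, a zero-length generation where the model emits EOS immediately) produces an empty index tensor, and indexing it with `[-1]` raises exactly this IndexError. A minimal sketch in plain PyTorch:

```python
import torch

# Normal case: nonzero() returns the indices of all unmasked positions,
# and [-1] picks the last one.
mask = torch.tensor([1, 1, 1, 0, 0])
print(mask.nonzero()[-1])          # tensor([2])

# Degenerate case: an all-zero mask (e.g., a zero-length response) yields
# an empty (0, 1) index tensor, so [-1] has nothing to select.
empty_mask = torch.zeros(5, dtype=torch.long)
print(empty_mask.nonzero().shape)  # torch.Size([0, 1])
empty_mask.nonzero()[-1]           # IndexError: index -1 is out of bounds
                                   # for dimension 0 with size 0
```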

Others

The same code and data had run without issue dozens of times; the error above only appeared after I adjusted trl's PPOConfig parameters.
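
A plausible trigger is a generation setting that allows zero-length responses, which then carry the all-zero mask seen in the traceback. One mitigation is to force at least one new token per response via transformers' `min_new_tokens` generation kwarg. A minimal sketch, assuming trl's standard `PPOTrainer.generate`/`step` loop rather than LLaMA-Factory's exact code, with illustrative values:

```python
# Hedged sketch: force at least one generated token so no response mask is
# all zeros. PPOTrainer.generate() forwards these kwargs to transformers'
# model.generate(); the exact plumbing inside LLaMA-Factory may differ.
generation_kwargs = {
    "min_new_tokens": 1,    # never return an empty response
    "max_new_tokens": 512,  # illustrative value, not from the original config
    "do_sample": True,
}
responses = ppo_trainer.generate(queries, return_prompt=False, **generation_kwargs)
# ... compute rewards for the responses, then:
stats = ppo_trainer.step(queries, responses, rewards)
```

Simply filtering empty responses out of `queries`/`responses`/`rewards` before `step` is less attractive, since trl's `step` safety checks expect the lists to match the configured batch size.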

yuanllong · Oct 27 '25 07:10

@hiyouga Hello, author. Could you please take a look at this issue?

yuanllong · Oct 29 '25 00:10