
Low GPU utilization, help with training parameters. (GPU utilization is very low during training; how can I improve training efficiency?)

lezhang7 opened this issue 7 months ago

I can train the model. However, I find GPU utilization extremely low with the following training script. Given the long prompt and response lengths, I set the batch size to be small. As shown in the screenshot, the GPUs sit idle for more than half the time. I'm wondering how to better utilize the GPUs.

(As the screenshot shows, the GPUs are mostly idle during training. I suspect my training parameters are misconfigured, but my own tuning hasn't worked well. How should I adjust the parameters in this situation to maximize GPU utilization?)

[Screenshot: GPU utilization over time, showing the GPUs idle for more than half of each training step]

```shell
    data.max_prompt_length=3072 \
    data.max_response_length=2048 \
    data.filter_overlong_prompts=True \
    data.truncation='error' \
    actor_rollout_ref.model.path=Qwen/Qwen2.5-7B-Instruct \
    actor_rollout_ref.actor.optim.lr=1e-6 \
    actor_rollout_ref.model.use_remove_padding=True \
    actor_rollout_ref.actor.ppo_mini_batch_size=16 \
    actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=8 \
    actor_rollout_ref.actor.use_kl_loss=True \
    actor_rollout_ref.actor.kl_loss_coef=0.001 \
    actor_rollout_ref.actor.kl_loss_type=low_var_kl \
    actor_rollout_ref.actor.entropy_coeff=0 \
    actor_rollout_ref.model.enable_gradient_checkpointing=True \
    actor_rollout_ref.actor.fsdp_config.param_offload=False \
    actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
    actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=16 \
    actor_rollout_ref.rollout.tensor_model_parallel_size=2 \
    actor_rollout_ref.rollout.name=vllm \
    actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
    actor_rollout_ref.rollout.n=5 \
    actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=16 \
    actor_rollout_ref.ref.fsdp_config.param_offload=True \
    algorithm.use_kl_in_reward=False \
    trainer.critic_warmup=0 \
    trainer.val_before_train=false \
    trainer.logger=['console','wandb'] \
    trainer.project_name='qwen2' \
    trainer.experiment_name='qwen2_7b' \
    trainer.n_gpus_per_node=8 \
    trainer.nnodes=1 \
    trainer.save_freq=150 \
    trainer.test_freq=-1 \
    trainer.total_epochs=15 $@
```

lezhang7 commented on Apr 21 '25

Are you using the reward function from the source code, or did you rewrite it yourself?

Okiii-lh commented on Apr 22 '25

I rewrote the reward function, and the same issue seems to appear.

522315428 commented on May 28 '25

OP, did you ever resolve this issue?

ZaymeShaw commented on Jul 14 '25

This doesn't appear to be a bug. In the end, rewriting the reward function implementation with asyncio made it fast.

lezhang7 commented on Jul 14 '25

> This doesn't appear to be a bug. In the end, rewriting the reward function implementation with asyncio made it fast.

Hello, can you share how you modified it?

huang3eng commented on Sep 09 '25
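For later readers: a minimal sketch of the kind of asyncio rewrite described above, assuming the bottleneck is blocking per-sample work inside a batch reward function (e.g. calls to an external verifier). The function names (`score_one`, `compute_rewards`) and the scoring logic are hypothetical placeholders, not verl's actual reward API; the point is only the concurrency pattern.

```python
import asyncio

# Hypothetical blocking per-sample scorer, standing in for a slow call
# (external verifier, sandboxed execution, HTTP request, etc.).
def score_one(response: str) -> float:
    # Placeholder scoring logic for illustration only.
    return float(len(response) > 0)

async def score_async(responses: list[str]) -> list[float]:
    # Run each blocking scorer call in the default thread pool so the
    # batch is scored concurrently instead of one sample at a time.
    loop = asyncio.get_running_loop()
    tasks = [loop.run_in_executor(None, score_one, r) for r in responses]
    # gather preserves input order, so rewards line up with responses.
    return await asyncio.gather(*tasks)

def compute_rewards(responses: list[str]) -> list[float]:
    # Synchronous entry point matching a batch reward-function shape.
    return asyncio.run(score_async(responses))

print(compute_rewards(["a", "", "abc"]))  # → [1.0, 0.0, 1.0]
```

This helps only when the per-sample work releases the GIL (I/O, subprocesses, network calls); pure-Python CPU-bound scoring would need a process pool instead of a thread pool.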