verl
Low GPU utilization during training: help with training parameters to improve training efficiency
I can train the model. However, I find GPU utilization extremely low with the following training script. Considering the long prompt and response lengths, I set the batch size to be small. As shown in the screenshot, the GPUs are idle for more than half of the time. I'm wondering how to better utilize the GPUs.
As shown in the screenshot, the GPUs sit mostly idle for most of the training time. I suspect the training parameters are misconfigured, but my own tuning attempts did not work well. In this situation, how should I adjust the training parameters to maximize GPU utilization?
```bash
data.max_prompt_length=3072 \
data.max_response_length=2048 \
data.filter_overlong_prompts=True \
data.truncation='error' \
actor_rollout_ref.model.path=Qwen/Qwen2.5-7B-Instruct \
actor_rollout_ref.actor.optim.lr=1e-6 \
actor_rollout_ref.model.use_remove_padding=True \
actor_rollout_ref.actor.ppo_mini_batch_size=16 \
actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=8 \
actor_rollout_ref.actor.use_kl_loss=True \
actor_rollout_ref.actor.kl_loss_coef=0.001 \
actor_rollout_ref.actor.kl_loss_type=low_var_kl \
actor_rollout_ref.actor.entropy_coeff=0 \
actor_rollout_ref.model.enable_gradient_checkpointing=True \
actor_rollout_ref.actor.fsdp_config.param_offload=False \
actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=16 \
actor_rollout_ref.rollout.tensor_model_parallel_size=2 \
actor_rollout_ref.rollout.name=vllm \
actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
actor_rollout_ref.rollout.n=5 \
actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=16 \
actor_rollout_ref.ref.fsdp_config.param_offload=True \
algorithm.use_kl_in_reward=False \
trainer.critic_warmup=0 \
trainer.val_before_train=false \
trainer.logger=['console','wandb'] \
trainer.project_name='qwen2' \
trainer.experiment_name='qwen2_7b' \
trainer.n_gpus_per_node=8 \
trainer.nnodes=1 \
trainer.save_freq=150 \
trainer.test_freq=-1 \
trainer.total_epochs=15 $@
```
Are you using the reward function from the original source code, or did you rewrite your own?
I rewrote the reward function and seem to be hitting the same issue.
OP, have you solved this problem?
This probably isn't a bug. I ended up rewriting the reward function implementation with asyncio, and it became much faster.
Hello, can you share how you modified it?
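For reference, a minimal sketch of what such an asyncio-based rewrite might look like. This is not the commenter's actual code: the function names (`score_one`, `compute_scores`) and the I/O-bound placeholder are illustrative assumptions. The point is to launch all per-sample reward computations concurrently with `asyncio.gather` instead of scoring samples one by one while the GPUs wait.

```python
# Hypothetical sketch, not the commenter's actual implementation.
# Assumes the per-sample reward involves an I/O-bound step (e.g. calling an
# external judge or verifier service) that can be awaited.
import asyncio


async def score_one(solution_str: str, ground_truth: str) -> float:
    """Score a single response. Replace the body with your real (I/O-bound) check."""
    await asyncio.sleep(0)  # placeholder for an awaitable call, e.g. an HTTP request
    return 1.0 if ground_truth in solution_str else 0.0


async def _score_all(solutions, ground_truths):
    # Issue all per-sample scorings concurrently instead of sequentially.
    tasks = [score_one(s, g) for s, g in zip(solutions, ground_truths)]
    return await asyncio.gather(*tasks)


def compute_scores(solutions, ground_truths):
    """Synchronous entry point, so a sequential reward loop can call it directly."""
    return asyncio.run(_score_all(solutions, ground_truths))


if __name__ == "__main__":
    print(compute_scores(["the answer is 42"], ["42"]))  # [1.0]
```

This only helps when the reward computation is I/O-bound (external APIs, sandboxed execution, etc.); a purely CPU-bound reward would need batching or multiprocessing instead.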