
Incomplete save of ckpt files

Open · husky23333 opened this issue · 4 comments

I am using dlrover on Megatron-DeepSpeed, and my machine has 4 GPUs. The hybrid parallel settings are as follows: TP groups [0,1], [2,3]; DP groups [0,2], [1,3]. I also configured DeepSpeed with ZeRO stage 1. The saving status of the ckpt files is shown below:

[screenshot: dlrover-deepspeed checkpoint directory listing]
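
For context, a minimal sketch of the kind of DeepSpeed configuration this describes; only the ZeRO stage is taken from the comment above, every other value is an assumption:

```python
# Illustrative DeepSpeed config; only the ZeRO stage comes from this issue.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,  # assumed value
    "zero_optimization": {
        # ZeRO-1 partitions only the optimizer states across data-parallel
        # ranks, which is why each DP rank writes its own
        # zero_pp_rank_{dp}_*optim_states.pt shard at save time.
        "stage": 1,
    },
}
```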

Normally, the ckpt files include the following:

[screenshot: expected checkpoint file listing]

layer_*-model_states.pt and zero_pp_rank_1_*optim_states.pt are missing
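
As a diagnostic, here is a small sketch to list which shards a Megatron-DeepSpeed checkpoint directory actually contains; the directory name and file patterns are assumptions based on the names above:

```python
import glob
import os

# Hypothetical checkpoint directory; adjust to your --save path + iteration tag.
ckpt_dir = "checkpoints/global_step1000"

# With TP=2 and DP=2 under ZeRO-1, each TP rank writes layer_*-model_states.pt
# shards, and each DP rank writes its own zero_pp_rank_{dp}_*optim_states.pt
# file, so DP ranks 0 and 1 should both appear.
model_shards = sorted(glob.glob(os.path.join(ckpt_dir, "layer_*-model_states.pt")))
optim_shards = sorted(glob.glob(os.path.join(ckpt_dir, "zero_pp_rank_*optim_states.pt")))

print(f"model shards ({len(model_shards)}):")
for f in model_shards:
    print(" ", os.path.basename(f))
print(f"optimizer shards ({len(optim_shards)}):")
for f in optim_shards:
    print(" ", os.path.basename(f))

# A missing zero_pp_rank_1_* file suggests DP rank 1 never finished (or never
# started) its save, e.g. because the job exited before all ranks flushed.
```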

husky23333 · May 21, 2024

Same question here: with parallel strategies such as TP, PP, and DeepSpeed+ZeRO configured, if the job hits network problems, GPU faults, or node failures, can training be recovered with elastic scaling?

wwj-2017-1117 · May 26, 2024

DLRover's fault tolerance is built on torchelastic's approach of restarting subprocesses, so in principle recovery is possible as long as a checkpoint exists. For a given parallel scheme, the only question is whether the number of subprocesses after the restart matches the number before the failure, i.e., whether the global world size changes:

  • If a fault is detected, such as a network-induced NCCL timeout, and the number of available nodes is unchanged, any parallel scheme can recover; it is no different from manually restarting the training.
  • If a node fails but the cluster still has spare nodes available, DLRover's ElasticJob can launch a new Pod to replace the failed node's Pod, which reduces to the case above.
  • If a node fails and the cluster has no spare nodes left, DLRover's ElasticJob has to scale the job down, i.e., the global world size shrinks. At that point you have to consider whether the framework supports scale-down. FSDP + DistributedCheckpoint, for example, can handle it. Megatron-LM 3D parallelism requires that the remaining number of nodes be an integer multiple of the PP size (with TP size = local world size) and that distributed checkpoint not be used; see the sketch after this list.
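
To make the scale-down constraint concrete, here is an illustrative check (not DLRover's actual code; the function name is hypothetical) for whether a reduced GPU count still fits a Megatron-LM TP x PP layout:

```python
def can_resume_3d(remaining_gpus: int, tp_size: int, pp_size: int) -> bool:
    """True if a Megatron-LM TP x PP layout still fits the remaining GPUs.

    The new global world size must be a positive multiple of
    tp_size * pp_size; whatever remains becomes the new data-parallel size.
    """
    mp = tp_size * pp_size
    return remaining_gpus >= mp and remaining_gpus % mp == 0

# Example with the 4-GPU setup from this issue (TP=2, PP=1):
for gpus in (4, 3, 2):
    ok = can_resume_3d(gpus, tp_size=2, pp_size=1)
    dp = gpus // 2 if ok else None
    print(f"{gpus} GPUs -> resumable: {ok}, new DP size: {dp}")
```

In the 4-GPU setup from this issue, losing one GPU leaves 3, which is not a multiple of the model-parallel size 2, so the job could only resume after shrinking to 2 GPUs.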

workingloong · May 28, 2024

@workingloong Is there logic in the code that handles relaunching a new Pod when a GPU drops off the bus or hits an ECC error? I don't seem to find it, and manager.HandleFaultPods doesn't look like that logic either.

wwj-2017-1117 · Jun 3, 2024

This issue has been automatically marked as stale because it has not had recent activity.

github-actions[bot] · Oct 13, 2024

This issue is being automatically closed due to inactivity.

github-actions[bot] · Oct 20, 2024