DDP deadlock ProcessGroupNCCL's watchdog got stuck

Open bhack opened this issue 1 year ago • 11 comments

🐛 Describe the bug

The process works correctly with DDP world size 1, but with world size > 1 it hangs, with GPU 0 at 0% utilization and GPU 1 pinned at maximum occupancy. I've reproduced this on both A100 and H100, with and without torch.compile.

I got this message on nightly:

PG ID 0 PG GUID 0(default_pg) Rank 1] ProcessGroupNCCL's watchdog got stuck for 480 seconds without making progress in monitoring enqueued collectives. This typically indicates a NCCL/CUDA API (e.g., CudaEventDestroy) hang blocking the watchdog, and could be triggered by another thread holding the GIL inside a CUDA api (for example, CudaEventDestroy), or other deadlock-prone behaviors.If you suspect the watchdog is not actually stuck and a longer timeout would help, you can either increase the timeout (TORCH_NCCL_HEARTBEAT_TIMEOUT_SEC) to a larger value or disable the heartbeat monitor (TORCH_NCCL_ENABLE_MONITORING=0).If either of aforementioned helps, feel free to file an issue to PyTorch about the short timeout or false positive abort; otherwise, please attempt to debug the hang.
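For reference, the two knobs mentioned at the end of that message can be set like this (a minimal sketch; the timeout value is only an example, and raising it only helps if the abort is a false positive rather than a real hang):

    # Sketch only: env var names come from the watchdog message above; values are
    # examples. They must be set before the process group is created, or exported
    # in the launch environment instead.
    import os

    os.environ["TORCH_NCCL_HEARTBEAT_TIMEOUT_SEC"] = "1800"  # raise the watchdog heartbeat timeout
    # os.environ["TORCH_NCCL_ENABLE_MONITORING"] = "0"       # or disable the heartbeat monitor entirely

    import torch.distributed as dist

    dist.init_process_group(backend="nccl")  # assumes a torchrun-style launch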

Versions

PyTorch version: 2.6.0.dev20241001+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35

Python version: 3.11.10 | packaged by conda-forge | (main, Sep 30 2024, 18:08:57) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.1.100+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.41
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241001+cu124
[pip3] torchaudio==2.5.0.dev20241001+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.0.dev20241001+cu124
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241001+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241001+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241001+cu124 pypi_0 pypi

cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o

bhack avatar Oct 03 '24 14:10 bhack

@bhack Do you have a simple repro script for this?

yf225 avatar Oct 07 '24 20:10 yf225

I cannot share the whole repo, but I'll try to describe the case in case a better user notification/failure can be added.

The problem is mainly related to a DDP-wrapped model whose forward looked something like this:

    def forward(self, input):
        if self.training:
            ...  # training-only computation (elided in the report)
            return losses
        else:
            return outputs

So to access the outputs for logging (e.g. in TensorBoard, W&B, etc.) you need to put the model in eval mode, and doing so triggers this unclear deadlock in DDP.

So it would be nice if we could add a more direct notification about this failure case.
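For context, a minimal sketch of the control flow described above (ToyModel, the shapes, and the dummy loss are invented for illustration; this shows the pattern rather than a confirmed standalone reproduction, launched with e.g. torchrun --nproc_per_node=2):

    # Hypothetical sketch of the described pattern: a DDP-wrapped model whose
    # forward returns losses in train mode and raw outputs in eval mode.
    import os

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP


    class ToyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Linear(16, 16)

        def forward(self, x):
            out = self.net(x)
            if self.training:
                return out.pow(2).mean()  # training path: return a (dummy) loss
            else:
                return out                # eval path: raw outputs for TB/W&B logging


    def main():
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        model = DDP(ToyModel().cuda(local_rank), device_ids=[local_rank])

        x = torch.randn(8, 16, device=f"cuda:{local_rank}")
        model(x).backward()   # normal training step; DDP syncs gradients here

        model.eval()          # switch to eval to grab outputs for logging
        with torch.no_grad():
            _ = model(x)      # the reported hang appears around this point with world size > 1
        model.train()

        dist.destroy_process_group()


    if __name__ == "__main__":
        main()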

bhack avatar Oct 08 '24 21:10 bhack

Hello, I'm facing the same issue. Has this problem been solved? @bhack

ZhuJiwei111 avatar Nov 07 '24 04:11 ZhuJiwei111

There is not much we can do without a repro script. Can someone help provide a repro we can run to debug?

Another thing that could help is dumping c++ and python stack traces for all threads at the time of the hang.
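For the Python side, one low-effort option is the standard-library faulthandler module near the top of the training script (a sketch; the C++ stacks still require attaching gdb or py-spy to the hung process):

    # Sketch: dump Python stacks of all threads on demand while the job is hung.
    import faulthandler
    import signal

    # After this, `kill -USR1 <pid>` against a hung rank prints every thread's
    # Python stack to stderr without killing the process.
    faulthandler.register(signal.SIGUSR1, all_threads=True)

    # Alternatively, dump automatically after a fixed delay without exiting:
    # faulthandler.dump_traceback_later(600, exit=False)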

wconstab avatar Nov 07 '24 14:11 wconstab

@ZhuJiwei111 Do you have minimal DDP code to reproduce it? Mine was too large to isolate into a standalone version.

bhack avatar Nov 07 '24 15:11 bhack

Sorry, I don't have such code. I also encountered this error while training a large model; it trained normally for 12 hours before the error occurred. It is not a random phenomenon, as I tried to run it again and it was terminated by the same error. However, I have successfully trained and completed runs in the past.

ZhuJiwei111 avatar Nov 08 '24 00:11 ZhuJiwei111

> There is not much we can do without a repro script. Can someone help provide a repro we can run to debug?
>
> Another thing that could help is dumping c++ and python stack traces for all threads at the time of the hang.

Hello, this repo: https://github.com/lyuwenyu/RT-DETR trained with this config file https://github.com/lyuwenyu/RT-DETR/blob/main/rtdetrv2_pytorch/configs/rtdetrv2/rtdetrv2_r18vd_sp1_120e_coco.yml will produce the same error.

kenh1991 avatar Nov 08 '24 14:11 kenh1991

Thanks @kenh1991! Can we repro this on an 8-GPU machine, or does it need more GPUs? What is the command to run it?

wconstab avatar Nov 11 '24 14:11 wconstab

I got the same error on 16 H100 nodes, but the error happens randomly.

Jason3900 avatar Nov 13 '24 00:11 Jason3900

> I cannot share the whole repo, but I'll try to describe the case in case a better user notification/failure can be added.
>
> The problem is mainly related to a DDP-wrapped model whose forward looked something like this:
>
>     def forward(self, input):
>         if self.training:
>             ...  # training-only computation (elided in the report)
>             return losses
>         else:
>             return outputs
>
> So to access the outputs for logging (e.g. in TensorBoard, W&B, etc.) you need to put the model in eval mode, and doing so triggers this unclear deadlock in DDP.
>
> So it would be nice if we could add a more direct notification about this failure case.

How can this be fixed?

ShileiCao avatar Nov 20 '24 07:11 ShileiCao

> Thanks @kenh1991! Can we repro this on an 8-GPU machine, or does it need more GPUs? What is the command to run it?

Hello, this is my run script. I have 1 million images, and the error occurred after training a fixed number of iterations (with batch size 24 the error does not occur; with batch size 8 it does).

    CUDA_VISIBLE_DEVICES=0,1,2,3 /home/admin/anaconda3/envs/llm/bin/torchrun --master_port=9909 --nproc_per_node=4 tools/train.py -c configs/rtdetrv2/rtdetrv2_r18vd_sp1_120e_coco.yml -t ./outputs/rtdetrv2_r18vd_sp1_120e_coco_24102201/last.pth --use-amp --seed=0 2>&1 | tee log/train_log.txt

kenh1991 avatar Nov 20 '24 08:11 kenh1991

I believe I am experiencing a related, if not the same, issue with DDP and compile.

https://github.com/slobodaapl/nvit

This contains a fully reproducible script to test with:

  1. run docker/build.sh
  2. copy settings.yaml to settings.local.yaml
  3. disable wandb to avoid having to set it up
  4. run bash docker_launcher.sh --num_gpus X, where X is greater than 1
  5. test with compile enabled and disabled by changing the settings: with compile enabled, the training process hangs forever on the first forward pass; it works fine with compile disabled (see the debug-settings sketch below)
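When running a repro like this, a few standard debug settings can make the hang easier to diagnose (a sketch; they must be set before the process group is initialized, and the buffer size is just an example value):

    # Sketch of debug settings for diagnosing NCCL/DDP hangs; values are examples.
    import os

    os.environ["NCCL_DEBUG"] = "INFO"                    # verbose NCCL logging
    os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"     # extra consistency checks on collectives
    os.environ["TORCH_NCCL_TRACE_BUFFER_SIZE"] = "2000"  # keep a trace of recent collectives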

slobodaapl avatar Nov 23 '24 12:11 slobodaapl

Ran into the same issue; the error happened randomly.

qsh-zh avatar Mar 09 '25 01:03 qsh-zh