[BUG] Encountering NaN gradients when using CUDA Graph
Describe the bug
When using Megatron-Core v0.9.0 with CUDA Graphs enabled, NaN gradients appear during the backward computation. This issue does not occur when CUDA Graphs are disabled.
To Reproduce
To reproduce this issue, follow these steps:
- Ensure that Megatron-Core v0.9.0 is installed and set up correctly in your environment.
- Configure the TransformerConfig by setting enable_cuda_graph to True (see the sketch after this list).
- Train a model or run a training script that involves backward computation.
- Observe the gradients during training; NaN values appear during the backward pass.
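For reference, a minimal sketch of step 2, assuming the enable_cuda_graph field on Megatron-Core's TransformerConfig (the model dimensions below are purely illustrative):

from megatron.core.transformer.transformer_config import TransformerConfig

# Only enable_cuda_graph is relevant to this report; the other arguments are
# example values needed to construct a valid config.
config = TransformerConfig(
    num_layers=24,
    hidden_size=1024,
    num_attention_heads=16,
    enable_cuda_graph=True,  # run transformer layers under CUDA graph capture (where supported)
)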
Expected behavior
The model should train normally without encountering NaN gradients, even with CUDA Graphs enabled. The use of CUDA Graphs should not affect the correctness of the gradient computations.
Stack trace/logs
[rank5]: Traceback (most recent call last):
[rank5]: File "/workspace/Megatron-LM/pretrain_gpt.py", line 265, in <module>
[rank5]: pretrain(
[rank5]: File "/workspace/Megatron-LM/megatron/training/training.py", line 360, in pretrain
[rank5]: iteration, num_floating_point_operations_so_far = train(
[rank5]: File "/workspace/Megatron-LM/megatron/training/training.py", line 1262, in train
[rank5]: train_step(forward_step_func,
[rank5]: File "/workspace/Megatron-LM/megatron/training/training.py", line 730, in train_step
[rank5]: losses_reduced = forward_backward_func(
[rank5]: File "/workspace/Megatron-LM/megatron/core/pipeline_parallel/schedules.py", line 492, in forward_backward_no_pipelining
[rank5]: config.finalize_model_grads_func(
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/finalize_model_grads.py", line 112, in finalize_model_grads
[rank5]: model_chunk.finish_grad_sync()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/distributed_data_parallel.py", line 422, in finish_grad_sync
[rank5]: bucket_group.finish_grad_sync()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/param_and_grad_buffer.py", line 302, in finish_grad_sync
[rank5]: self.start_grad_sync()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/param_and_grad_buffer.py", line 244, in start_grad_sync
[rank5]: self.check_for_nan_in_grad()
[rank5]: File "/workspace/Megatron-LM/megatron/core/distributed/param_and_grad_buffer.py", line 148, in check_for_nan_in_grad
[rank5]: assert not norm_is_nan, (
[rank5]: AssertionError: Rank 5: found NaN in local grad norm in backward pass before data-parallel communication collective. Device: 5, node: infra-train-3-ali-0
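The assertion is raised by the NaN check that runs on each bucket's local gradients before the data-parallel all-reduce. Conceptually, it amounts to something like the following (a simplified sketch, not the exact code in param_and_grad_buffer.py):

import torch

def check_for_nan_in_grad(grad_data: torch.Tensor, rank: int) -> None:
    # Compute the local L2 norm of the flattened gradient bucket and fail fast
    # if it is NaN, so the offending rank is identified before the collective runs.
    norm = grad_data.norm(p=2)
    assert not torch.isnan(norm).item(), (
        f"Rank {rank}: found NaN in local grad norm in backward pass "
        f"before data-parallel communication collective."
    )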
Environment (please complete the following information)
- Megatron-LM commit ID: 1b869f019af2c7aabf9c4deffe6eb64ebef88608
- PyTorch version: 2.3.0+cu121
- CUDA version: 12.4
- NCCL version: 2.20.5
- TransformerEngine version: 1.11.0+c27ee60
Same issue with TE release_v1.12 and TP >= 2
Marking as stale. No activity in 60 days.
Any update on this? Running into the same error now with the following setup: Megatron-LM@7ee599a, NeMo@633cb60, TransformerEngine@ab4fd3c
and configs:
model.use_te_rng_tracker: True
model.enable_cuda_graph: True
Marking as stale. No activity in 60 days.
This issue was closed because it has been inactive for 7 days since being marked as stale.