
[H100] local test C10D forward does not have tensor result equivalency (16% mismatch)

Open · lessw2020 opened this issue on Nov 13, 2023 · 0 comments

Using the latest nightly (1109) on an H100 server, running tests/local_test_c10d.py fails at the final tensor comparison with a ~16% element mismatch. The differences appear to be rounding error (largest absolute diff is 0.0097).
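A largest difference of ~1e-2 on fp32 outputs looks like reduced-precision rounding (e.g. TF32 matmul on Hopper) rather than a logic error. A minimal, hypothetical check, assuming TF32 is the culprit, would be to force full-precision matmuls before running the pipeline and rerun the test:

```python
# Hypothetical diagnostic (not part of local_test_c10d.py): force full-precision
# fp32 matmuls so TF32 rounding on H100 can be ruled in or out as the cause.
import torch

torch.backends.cuda.matmul.allow_tf32 = False   # no TF32 in matmul kernels
torch.backends.cudnn.allow_tf32 = False         # no TF32 in cuDNN kernels
torch.set_float32_matmul_precision("highest")   # same knob, newer-style API
```

Repro command and full log: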

torchrun --nproc-per-node 4 local_test_c10d.py
[2023-11-13 12:18:19,117] torch.distributed.run: [WARNING] 
[2023-11-13 12:18:19,117] torch.distributed.run: [WARNING] *****************************************
[2023-11-13 12:18:19,117] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
[2023-11-13 12:18:19,117] torch.distributed.run: [WARNING] *****************************************
WARNING > Did not find any loss value from model output, your pipeline will be in inference mode. If you want your pipeline to be in training mode, please specify a loss value via `output_loss_value_spec`.
WARNING > Did not find any loss value from model output, your pipeline will be in inference mode. If you want your pipeline to be in training mode, please specify a loss value via `output_loss_value_spec`.
WARNING > Did not find any loss value from model output, your pipeline will be in inference mode. If you want your pipeline to be in training mode, please specify a loss value via `output_loss_value_spec`.
WARNING > Did not find any loss value from model output, your pipeline will be in inference mode. If you want your pipeline to be in training mode, please specify a loss value via `output_loss_value_spec`.
[rank3]:[W ProcessGroupNCCL.cpp:2302] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank1]:[W ProcessGroupNCCL.cpp:2302] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank2]:[W ProcessGroupNCCL.cpp:2302] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank0]:[W ProcessGroupNCCL.cpp:2302] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
NCCL version 2.19.3+cuda12.1
[rank1]:[W ProcessGroupNCCL.cpp:2302] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
NCCL version 2.19.3+cuda12.1
[rank2]:[W ProcessGroupNCCL.cpp:2302] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
NCCL version 2.19.3+cuda12.1
Rank 0 completes
Rank 1 completes
Rank 3 completes
Rank 2 completes
Traceback (most recent call last):
  File "/data/users/less/pippy/test/local_test_c10d.py", line 121, in <module>
    main()
  File "/data/users/less/pippy/test/local_test_c10d.py", line 117, in main
    run_worker(args)
  File "/data/users/less/pippy/test/local_test_c10d.py", line 75, in run_worker
    torch.testing.assert_close(out, ref_out)
  File "/home/less/local/miniconda3/envs/trident/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1520, in assert_close
    raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!

Mismatched elements: 88559 / 524288 (16.9%)
Greatest absolute difference: 0.009765625 at index (566, 103) (up to 1e-05 allowed)
Greatest relative difference: 1.0 at index (466, 13) (up to 1.3e-06 allowed)
[2023-11-13 12:18:44,148] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 3 (pid: 3929737) of binary: /home/less/local/miniconda3/envs/trident/bin/python
Traceback (most recent call last):
  File "/home/less/local/miniconda3/envs/trident/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
  File "/home/less/local/miniconda3/envs/trident/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 351, in wrapper
    return f(*args, **kwargs)
  File "/home/less/local/miniconda3/envs/trident/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
    run(args)
  File "/home/less/local/miniconda3/envs/trident/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
    elastic_launch(
  File "/home/less/local/miniconda3/envs/trident/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/less/local/miniconda3/envs/trident/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
local_test_c10d.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-11-13_12:18:44
  host      : devgpu011.cco3.facebook.com
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 3929737)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
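For reference, the failing check uses torch.testing's float32 defaults (rtol=1.3e-6, atol=1e-5), as shown in the error above. If the divergence really is benign rounding, a hypothetical local workaround in the test would be to compare with looser tolerances:

```python
# Hypothetical relaxation of the comparison in local_test_c10d.py, assuming the
# ~1e-2 differences are benign rounding; rtol/atol here mirror torch.testing's
# bfloat16 defaults, not anything PiPPy-specific.
torch.testing.assert_close(out, ref_out, rtol=1.6e-2, atol=1e-5)
```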
