
MXFP8 support in Userbuffers

timmoon10 opened this pull request 7 months ago

Description

This PR adds basic support for overlapped tensor-parallel communication with MXFP8 data, with the following limitations (the first two are illustrated in the sketch after this list):

  • Tensor dimensions must be multiples of 128
  • MXFP8 scaling factors are communicated with blocking NCCL collectives
  • The wgrad GEMM + grad output all-gather (AG) case is not optimal: we currently overlap the dgrad GEMM with a row-wise grad output AG (Userbuffers) and a column-wise grad output AG (NCCL)
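
As a rough illustration of the first two limitations (the function names below are hypothetical, not this PR's actual API): the overlap path assumes 128-aligned tensor dimensions, and the MXFP8 scaling factors take a separate, blocking NCCL all-gather alongside the overlapped data transfer.

```python
# Hypothetical sketch of the constraints above; names are illustrative only.
import torch
import torch.distributed as dist

def mxfp8_ub_overlap_supported(shape, align: int = 128) -> bool:
    """True if every tensor dimension is a multiple of 128, the alignment
    this PR requires for the MXFP8 + Userbuffers overlap path."""
    return all(dim % align == 0 for dim in shape)

def all_gather_mxfp8_scales(local_scales: torch.Tensor, group=None) -> torch.Tensor:
    """Gather scaling factors with a blocking NCCL collective, separately
    from the overlapped Userbuffers data transfer."""
    world_size = dist.get_world_size(group)
    out = torch.empty(
        (world_size * local_scales.shape[0],) + tuple(local_scales.shape[1:]),
        dtype=local_scales.dtype,
        device=local_scales.device,
    )
    dist.all_gather_into_tensor(out, local_scales.contiguous(), group=group)
    return out

assert mxfp8_ub_overlap_supported((4096, 8192))
assert not mxfp8_ub_overlap_supported((4096, 1000))  # 1000 % 128 != 0
```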

In the process, I've done some significant refactoring:

  • Removed quantized tensor logic from the Userbuffers communicators, so they are only responsible for the Userbuffers buffer itself
  • Added helper functions to expose the Userbuffers buffer as a quantized tensor and to handle scaling factors (sketched after this list)
  • Restored support for Userbuffers in te.Sequential
  • Reorganized the module implementation for clarity
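
As a rough sketch of the helper-function idea (all names below are hypothetical, not this PR's actual API): the communicator now only hands out its raw buffer, and a separate helper wraps that buffer as a quantized tensor with externally managed scaling factors.

```python
# Hypothetical sketch of the refactoring: the UB communicator owns only the
# raw buffer; quantized-tensor wrapping and scale handling live outside it.
def ub_buffer_as_quantized_tensor(ub_comm, quantizer, shape, dtype):
    """Expose the Userbuffers workspace as a quantized tensor whose scaling
    factors are managed by the quantizer, not by the communicator."""
    raw_buffer = ub_comm.get_buffer(shape, dtype)  # hypothetical accessor
    return quantizer.wrap(raw_buffer)              # hypothetical wrapper
```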

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Infra/Build change
  • [x] Code refactoring

Changes

  • Add MXFP8 support in Userbuffers
  • Move Userbuffers logic for quantized tensors out of UB communicator and into helper functions
  • Restore Userbuffers support in te.Sequential (usage sketched below)
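
For orientation, a minimal usage sketch of the te.Sequential item. Treat the import path and operation names as assumptions (they vary across TE versions), and note that Userbuffers/tensor-parallel initialization is omitted entirely.

```python
# Usage sketch only; import path and op names are assumptions, not this PR's
# verified API, and UB/tensor-parallel setup is omitted for brevity.
import torch
from transformer_engine.pytorch.ops import Sequential, BasicLinear  # assumed path

model = Sequential(
    BasicLinear(4096, 4096),
    BasicLinear(4096, 4096),
)
x = torch.randn(128, 4096, device="cuda")
y = model(x)  # with UB enabled, TP communication can overlap the GEMMs
```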

Checklist:

  • [x] I have read and followed the contributing guidelines
  • [ ] The functionality is complete
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] My changes generate no new warnings
  • [x] I have added tests that prove my fix is effective or that my feature works
  • [ ] New and existing unit tests pass locally with my changes

timmoon10 commented Apr 22 '25 19:04

/te-ci L1

timmoon10 commented Apr 24 '25 02:04

/te-ci pytorch L1

timmoon10 commented Apr 25 '25 04:04

/te-ci pytorch L1

timmoon10 commented Apr 26 '25 01:04

Have we checked the implementation with zero-tolerance tensor dumps? I still have copies in my drive.

zhongbozhu commented Apr 28 '25 17:04

/te-ci pytorch L1

timmoon10 commented Apr 28 '25 22:04

/te-ci L1

timmoon10 commented May 01 '25 19:05

/te-ci pytorch L1

timmoon10 commented May 02 '25 21:05

/te-ci pytorch L1

timmoon10 commented May 07 '25 03:05

/te-ci pytorch L1

timmoon10 commented May 07 '25 06:05

/te-ci pytorch L1

timmoon10 commented May 07 '25 07:05