TransformerEngine
MXFP8 support in Userbuffers
Description
This PR adds basic support for overlapped tensor-parallel communication with MXFP8 data, with the following limitations:
- Tensor dimensions must be multiples of 128 (see the sketch after this list)
- MXFP8 scaling factors are communicated with blocking NCCL collectives
- The wgrad GEMM + grad output AG case is not optimal: we currently overlap the dgrad GEMM with the row-wise grad output AG (via Userbuffers) while performing the column-wise grad output AG with NCCL
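A minimal sketch of the first restriction above. The constant and helper names here are hypothetical and not part of this PR's API; they only illustrate the multiple-of-128 eligibility check a caller might perform before relying on overlapped MXFP8 communication:

```python
# Hypothetical helper (not part of this PR): check whether a GEMM's tensor
# dimensions satisfy the multiple-of-128 restriction described above.
MXFP8_UB_ALIGNMENT = 128  # assumed alignment implied by the limitation list

def can_overlap_mxfp8_comm(*dims: int) -> bool:
    """Return True if every tensor dimension is a multiple of 128."""
    return all(d % MXFP8_UB_ALIGNMENT == 0 for d in dims)

# Example: a layer with 8192 tokens and hidden size 4096 qualifies,
# while an oddly sized dimension would fall back to non-overlapped comm.
assert can_overlap_mxfp8_comm(8192, 4096)
assert not can_overlap_mxfp8_comm(8192, 4100)
```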
In the process, I've done some significant refactoring:
- Removed quantized tensor logic from the Userbuffers communicators, so they are only responsible for the Userbuffers buffer itself
- Added helper functions to expose the Userbuffers buffer as a quantized tensor and to handle scaling factors (see the sketch after this list)
- Restored support for Userbuffers in te.Sequential
- Reorganized the module implementation for clarity
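An illustrative-only sketch of the kind of helper described above: the communicator hands out a raw Userbuffers-registered buffer, and a separate function layers quantization metadata on top. All names here (MXFP8TensorView, wrap_ub_as_mxfp8) are hypothetical, not the helpers added in this PR:

```python
from dataclasses import dataclass
import torch

@dataclass
class MXFP8TensorView:
    """Quantized-tensor view over a Userbuffers-registered buffer (hypothetical)."""
    data: torch.Tensor       # row-wise FP8 data stored in the UB buffer
    scale_inv: torch.Tensor  # block scaling factors, gathered separately over NCCL

def wrap_ub_as_mxfp8(ub_buffer: torch.Tensor, scale_inv: torch.Tensor) -> MXFP8TensorView:
    """Attach externally gathered scaling factors to a raw UB buffer.

    The communicator only manages the buffer itself; quantization concerns
    are handled by helper functions like this one.
    """
    return MXFP8TensorView(data=ub_buffer, scale_inv=scale_inv)
```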
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or new content)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [x] Code refactoring
Changes
- Add MXFP8 support in Userbuffers
- Move Userbuffers logic for quantized tensors out of UB communicator and into helper functions
- Restore Userbuffers support in te.Sequential
Checklist:
- [x] I have read and followed the contributing guidelines
- [ ] The functionality is complete
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
/te-ci L1
/te-ci pytorch L1
Have we checked the implementation against the zero-tolerance tensor dumps? I still have copies in my drive.