
[PyTorch] FP8 AllToAll

Open · yaox12 opened this pull request 1 year ago · 0 comments

Description

Add an FP8AllToAll layer that performs cast_to_fp8 -> all_to_all in FP8 -> cast_from_fp8. By halving the bytes on the wire, we see roughly a 5% end-to-end performance gain in Mixtral 8x7B and 8x22B training with parallelism configs where the all-to-all runs over inter-node links such as IB/RoCE.
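The data flow above can be sketched in plain Python. This is a hypothetical, simplified illustration only: real FP8 uses E4M3/E5M2 encodings and the actual layer dispatches a torch.distributed all-to-all, whereas here the 8-bit cast is faked with a per-tensor scale plus int8-style rounding, and the collective is simulated in-process so the chunk routing is visible.

```python
def cast_to_fp8(values):
    """Simplified stand-in for cast_to_fp8: quantize floats to 8-bit
    integer codes using a per-tensor scale (real FP8 uses E4M3/E5M2)."""
    amax = max((abs(v) for v in values), default=1.0) or 1.0
    scale = 127.0 / amax  # map the largest magnitude onto the int8 range
    codes = [round(v * scale) for v in values]
    return codes, scale

def cast_from_fp8(codes, scale):
    """Dequantize 8-bit codes back to floats."""
    return [c / scale for c in codes]

def all_to_all(chunks_per_rank):
    """Simulated all_to_all: rank `src` sends its chunk `dst` to rank `dst`,
    so rank `dst` receives one chunk from every rank."""
    world = len(chunks_per_rank)
    return [[chunks_per_rank[src][dst] for src in range(world)]
            for dst in range(world)]

def fp8_all_to_all(chunks_per_rank):
    """cast_to_fp8 -> all_to_all in 8-bit -> cast_from_fp8.
    Only the 8-bit codes (plus a small scale) travel 'on the wire'."""
    quantized = [[cast_to_fp8(chunk) for chunk in chunks]
                 for chunks in chunks_per_rank]
    exchanged = all_to_all(quantized)
    return [[cast_from_fp8(codes, scale) for codes, scale in chunks]
            for chunks in exchanged]
```

Because quantization happens before the exchange, each rank ships half the bytes of a BF16 all-to-all at the cost of one cast on each side, which is where the observed inter-node speedup comes from.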

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)

Changes

Please list the changes introduced in this PR:

  • Add a FP8AllToAll layer

Checklist:

  • [x] I have read and followed the contributing guidelines
  • [x] The functionality is complete
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [x] I have made corresponding changes to the documentation
  • [x] My changes generate no new warnings
  • [x] I have added tests that prove my fix is effective or that my feature works
  • [x] New and existing unit tests pass locally with my changes

yaox12 · May 17 '24 08:05