
[PyTorch] Fused dbias-cast-transpose in bias operation

Open · timmoon10 opened this pull request 1 year ago • 2 comments

Description

This PR modifies the bias operation (see the operation-based API in https://github.com/NVIDIA/TransformerEngine/pull/707) so that its backward pass can launch a single fused kernel that computes bgrad, FP8 dgrad, and the FP8 dgrad transpose.
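For reference, the fused kernel replaces three separate passes over the output gradient with one. A minimal unfused sketch of the same math in plain PyTorch (function name, scale handling, and the FP8 format are illustrative, not TE internals):

```python
import torch

def dbias_cast_transpose_reference(grad_output: torch.Tensor,
                                   fp8_scale: torch.Tensor):
    """Unfused reference for the dbias-cast-transpose fusion.
    grad_output: 2D gradient w.r.t. the bias op's output, shape (N, K)."""
    # bgrad: reduce the output gradient over the token/batch dimension
    bgrad = grad_output.sum(dim=0)
    # Apply the scaling factor in higher precision before the FP8 cast
    scaled = grad_output.float() * fp8_scale
    # FP8 dgrad (float8_e4m3fn is illustrative; TE selects the format
    # from its FP8 recipe)
    dgrad_fp8 = scaled.to(torch.float8_e4m3fn)
    # FP8 dgrad transpose, needed for the wgrad GEMM's memory layout
    dgrad_fp8_t = scaled.t().contiguous().to(torch.float8_e4m3fn)
    return bgrad, dgrad_fp8, dgrad_fp8_t
```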

Once https://github.com/NVIDIA/TransformerEngine/pull/1164 is merged, this could be extended to call a fused kernel for dgelu, bgrad, FP8 dgrad, and FP8 dgrad transpose. However, it may be better to defer that to a follow-up PR, since it will require some infrastructure changes (e.g. we want to apply different fusions in the FP8 and non-FP8 cases).
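As a hypothetical example of where the new fusion applies, a bias op following a bias-free linear op in the operation-based API would hit this path. Op names below follow PR #707 but should be checked against the `te.ops` module; this is a sketch, not a guaranteed API:

```python
import torch
import transformer_engine.pytorch as te

# Hypothetical pipeline in the operation-based API from PR #707
model = te.ops.Sequential(
    te.ops.BasicLinear(1024, 1024),  # GEMM without bias
    te.ops.Bias(1024),               # bias op with the fused backward
).cuda()

x = torch.randn(512, 1024, device="cuda", requires_grad=True)
with te.fp8_autocast():
    y = model(x)
# The Bias backward can now produce bgrad plus the FP8 dgrad and its
# transpose in one fused kernel launch instead of three.
y.sum().backward()
```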

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Infra/Build change
  • [ ] Code refactor

Changes

  • Fused dbias-cast-transpose kernel in bias operation

Checklist:

  • [x] I have read and followed the contributing guidelines
  • [x] The functionality is complete
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [x] I have made corresponding changes to the documentation
  • [ ] My changes generate no new warnings
  • [x] I have added tests that prove my fix is effective or that my feature works
  • [x] New and existing unit tests pass locally with my changes

timmoon10 · Sep 06 '24 19:09

/te-ci pytorch

timmoon10 · Sep 06 '24 19:09

/te-ci pytorch

timmoon10 · Sep 26 '24 03:09

/te-ci pytorch

timmoon10 · Oct 19 '24 00:10

/te-ci pytorch

timmoon10 · Nov 06 '24 01:11