[PyTorch] Fused dbias-cast-transpose in bias operation
Description
This PR modifies the bias operation (see the operation-based API introduced in https://github.com/NVIDIA/TransformerEngine/pull/707) so that its backward pass can launch a single fused kernel that computes bgrad, the FP8 dgrad, and the FP8 dgrad transpose.
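For reference, the sketch below shows the unfused computation that the fused kernel replaces. The helper name and the scale-and-clamp FP8 simulation are illustrative only, not the actual TE internals; the real kernel writes genuine FP8 storage and produces all three outputs in a single pass over `grad_output`.

```python
import torch

def dbias_cast_transpose_reference(grad_output: torch.Tensor, scale: torch.Tensor):
    """Unfused reference for the three outputs the fused kernel produces.

    Hypothetical helper for illustration; FP8 is simulated here by
    scaling and saturating in the original dtype.
    """
    grad_2d = grad_output.reshape(-1, grad_output.shape[-1])

    # bgrad: reduce grad_output over all leading (batch/sequence) dims
    dbias = grad_2d.sum(dim=0)

    # FP8 dgrad: scale and saturate (57344 is the E5M2 max; the real
    # kernel casts to FP8 storage instead of clamping in high precision)
    dgrad_fp8 = (grad_2d * scale).clamp(-57344.0, 57344.0)

    # FP8 dgrad transpose: the transposed copy needed by the wgrad GEMM
    dgrad_fp8_t = dgrad_fp8.t().contiguous()

    return dbias, dgrad_fp8, dgrad_fp8_t
```

Fusing these steps avoids reading `grad_output` from global memory three times.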
Once https://github.com/NVIDIA/TransformerEngine/pull/1164 is merged, this could be extended to call a fused kernel for dgelu, bgrad, FP8 dgrad, and FP8 dgrad transpose (see the sketch below). However, it may be better to defer that to a follow-up PR, since it will require some infrastructure changes (e.g., we will want to apply different fusions in the FP8 and non-FP8 cases).
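As a rough illustration of that deferred extension: the fusion would additionally apply the GELU derivative to the incoming gradient before the dbias/cast/transpose steps above. A minimal pure-PyTorch sketch, assuming the tanh GELU approximation (this is not the TE kernel):

```python
import math
import torch

def dgelu(grad_output: torch.Tensor, gelu_input: torch.Tensor) -> torch.Tensor:
    # Derivative of the tanh-approximated GELU, applied to the incoming
    # gradient. The deferred fusion would fold this into the same pass
    # as the dbias + FP8 cast + transpose above.
    k = math.sqrt(2.0 / math.pi)
    x = gelu_input
    tanh_out = torch.tanh(k * (x + 0.044715 * x**3))
    dgelu_dx = (
        0.5 * (1.0 + tanh_out)
        + 0.5 * x * (1.0 - tanh_out**2) * k * (1.0 + 3.0 * 0.044715 * x**2)
    )
    return grad_output * dgelu_dx
```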
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or new content)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [ ] Code refactor
Changes
- Fused dbias-cast-transpose kernel in bias operation
Checklist:
- [x] I have read and followed the contributing guidelines
- [x] The functionality is complete
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
/te-ci pytorch