[PyTorch] Userbuffers support in operation-based API
Description
This PR adds basic support for using Userbuffers in the linear operation to overlap tensor-parallel communication with GEMMs. The support is implemented as fused operations:
```python
model = te.ops.Sequential(
    te.ops.BasicLinear(...),
    te.ops.Bias(...),
    te.ops.ReduceScatter(...),
)  # Fused into UserbuffersForwardLinear
```
I've tried to avoid touching the core UB infrastructure in transformer_engine/pytorch/module/base.py, so I've kept the messy API and hackily worked around some bugs. This feature should be considered experimental and unstable.
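To make the intended usage concrete, here is a minimal sketch of driving the fused path end to end. It leans on the existing `initialize_ub` entry point from `transformer_engine/pytorch/module/base.py`; the buffer shape, layer sizes, and constructor arguments below are illustrative assumptions rather than the exact API added by this PR.

```python
import torch
import transformer_engine.pytorch as te

# Hypothetical sizes, chosen only for illustration.
seq_len, batch_size, hidden_size = 2048, 2, 4096

# One GPU per rank; the whole world acts as the tensor-parallel group here.
torch.distributed.init_process_group(backend="nccl")
tp_size = torch.distributed.get_world_size()
tp_group = torch.distributed.new_group()

# Allocate the Userbuffers workspace (pre-existing API from module/base.py;
# the exact arguments are assumptions for this sketch).
te.initialize_ub(
    [seq_len * batch_size, hidden_size],
    tp_size,
    dtype=torch.bfloat16,
)

# Row-parallel linear followed by reduce-scatter, eligible for fusion
# into UserbuffersForwardLinear.
model = te.ops.Sequential(
    te.ops.BasicLinear(
        hidden_size // tp_size,  # local shard of the input features
        hidden_size,
        device="cuda",
        dtype=torch.bfloat16,
    ),
    te.ops.Bias(hidden_size, device="cuda", dtype=torch.bfloat16),
    te.ops.ReduceScatter(tp_group),
)
```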
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or new content)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [ ] Code refactor
Changes
- Add fused operation for linear forward with Userbuffers
- Add fused operation for linear backward with Userbuffers (both paths are exercised in the sketch after this list)
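Continuing the sketch from the Description section (reusing `model`, `tp_size`, and the sizes defined there), a hypothetical smoke test that touches both new fused operations is a single forward and backward pass; the input shape is an assumption tied to the row-parallel layout chosen above.

```python
# Input sharded along the feature dimension for the row-parallel GEMM.
x = torch.randn(
    seq_len * batch_size,
    hidden_size // tp_size,
    device="cuda",
    dtype=torch.bfloat16,
    requires_grad=True,
)

y = model(x)        # forward pass through the fused Userbuffers linear
y.sum().backward()  # backward pass through the fused Userbuffers backward op
```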
Checklist:
- [x] I have read and followed the contributing guidelines
- [x] The functionality is complete
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes