TransformerEngine
[Common/PyTorch] Grouped GEMM via multi-stream cuBLAS
Description
This PR adds grouped GEMM support for FP32/BF16/FP16 via multi-stream cuBLAS, targeting MoE training.
I'll add FP8 support and a GroupedLinear layer in follow-up PRs.
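For context, a minimal sketch of the multi-stream cuBLAS pattern this PR is based on (not the PR's actual implementation; the function name `grouped_gemm_multistream`, its signature, and the FP32-only `cublasSgemm` path are illustrative assumptions, and error checking is omitted):

```cpp
// Sketch: each group's GEMM is issued on its own CUDA stream so the many
// small per-expert matmuls can overlap on the GPU instead of serializing.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

// Problem sizes (m, n, k) may differ per group; FP16/BF16 would go through
// cublasGemmEx instead of cublasSgemm, which is used here to keep it simple.
void grouped_gemm_multistream(const std::vector<const float*>& A,
                              const std::vector<const float*>& B,
                              const std::vector<float*>& C,
                              const std::vector<int>& m,
                              const std::vector<int>& n,
                              const std::vector<int>& k,
                              int num_streams) {
  std::vector<cudaStream_t> streams(num_streams);
  std::vector<cublasHandle_t> handles(num_streams);
  for (int i = 0; i < num_streams; ++i) {
    cudaStreamCreate(&streams[i]);
    cublasCreate(&handles[i]);
    cublasSetStream(handles[i], streams[i]);  // one handle per stream
  }

  const float alpha = 1.0f, beta = 0.0f;
  for (size_t g = 0; g < A.size(); ++g) {
    int s = g % num_streams;  // round-robin the GEMMs over the streams
    // Column-major SGEMM: C[g] = A[g] * B[g]
    cublasSgemm(handles[s], CUBLAS_OP_N, CUBLAS_OP_N,
                m[g], n[g], k[g],
                &alpha, A[g], m[g], B[g], k[g],
                &beta, C[g], m[g]);
  }

  for (int i = 0; i < num_streams; ++i) {
    cudaStreamSynchronize(streams[i]);
    cublasDestroy(handles[i]);
    cudaStreamDestroy(streams[i]);
  }
}
```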
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or a new content)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Changes
Please list the changes introduced in this PR:
- Add a multi-stream cuBLAS-based grouped GEMM implementation and the corresponding PyTorch binding (see the sketch after this list).
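A hypothetical sketch of what such a PyTorch binding can look like; `te_grouped_gemm`, its signature, and the placeholder body are assumptions for illustration, not this PR's actual API:

```cpp
// Illustrative torch-extension binding for a grouped GEMM entry point.
#include <torch/extension.h>
#include <vector>

std::vector<torch::Tensor> te_grouped_gemm(std::vector<torch::Tensor> A,
                                           std::vector<torch::Tensor> B) {
  // Placeholder body: a real implementation would issue each pair's GEMM
  // through cuBLAS on its own CUDA stream (as in the sketch above).
  std::vector<torch::Tensor> C;
  C.reserve(A.size());
  for (size_t g = 0; g < A.size(); ++g) {
    C.push_back(torch::matmul(A[g], B[g]));
  }
  return C;
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("te_grouped_gemm", &te_grouped_gemm,
        "Grouped GEMM (multi-stream cuBLAS) - illustrative binding only");
}
```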
Checklist:
- [x] I have read and followed the contributing guidelines
- [x] The functionality is complete
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes