TransformerEngine
[PyTorch] Bunch of memory management fixes
Description
This PR changes:
- internal tensors are now removed after they are prepared for saving,
- the cached weight is no longer marked as an internal tensor; otherwise it would be removed between microbatches,
- adds a test that catches the error above,
- scaling factors are saved for backward, which enables offloading them,
- re-enables the weight offloading test for MXFP8, which had been disabled because of problems with offloading the scaling factors,
- fixes `.clear()` so that it also removes `scale_inv` tensors and works correctly for `*TensorBase` classes.

This PR breaks compatibility with Thunder.
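The interaction between internal tensors, the cached weight, and `.clear()` can be sketched in plain Python. This is an illustrative stand-in, not TransformerEngine's actual classes: `FakeTensor`, its fields, and the simplified `prepare_for_saving`/`clear` methods are hypothetical, chosen only to mirror the lifecycle the fixes above address.

```python
class FakeTensor:
    """Hypothetical stand-in for a quantized tensor with scaling factors."""

    def __init__(self, data, scale_inv, internal=False):
        self.data = data            # main payload
        self.scale_inv = scale_inv  # per-block scaling factors
        self.internal = internal    # internal tensors may be freed after saving

    def prepare_for_saving(self):
        """Return state saved for backward; an internal tensor's payload
        can then be dropped to save memory (the first fix above)."""
        state = (self.data, self.scale_inv)
        if self.internal:
            self.data = None  # released between forward and backward
        return state

    def clear(self):
        """Release the payload AND the scaling factors (the .clear() fix:
        previously scale_inv was not removed)."""
        self.data = None
        self.scale_inv = None


# A cached weight must NOT be internal: it has to survive across microbatches.
weight = FakeTensor(data=[1.0, 2.0], scale_inv=[0.5], internal=False)
weight.prepare_for_saving()
assert weight.data is not None   # still available for the next microbatch

# Internal tensors can be freed once their state is saved for backward.
act = FakeTensor(data=[3.0], scale_inv=[0.25], internal=True)
saved = act.prepare_for_saving()
assert act.data is None          # payload released; `saved` keeps the state

act.clear()
assert act.scale_inv is None     # clear() now drops scale_inv too
```

Under this model, the bug the new test catches is a cached weight accidentally flagged as internal: its payload would be freed after the first microbatch and be missing in the second.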
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or a new content)
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [ ] Code refactoring
Checklist:
- [x] I have read and followed the contributing guidelines
- [x] The functionality is complete
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
/te-ci pytorch
@guyueh1 confirmed offline that the NeMo/MCore sanity tests pass with this change, including the parallelism configurations.
/te-ci pytorch
The Thunder integration bug is fixed with https://github.com/Lightning-AI/lightning-thunder/pull/1826.