
Don't save fp8 q/k/v/out tensors when using bf16 bprop

Open guyueh1 opened this pull request 1 year ago • 1 comment

Description

When running with FP8_DPA=1 and NVTE_FP8_DPA_BWD=0, the backward pass uses the BF16 q/k/v/out tensors, so the FP8 q/k/v/out tensors are never touched in backprop. We should therefore avoid saving them for backward, which reduces the peak memory footprint.
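To illustrate the intent, here is a minimal PyTorch sketch of saving only the tensor set the backward will actually use. This is not TransformerEngine's actual code: the class name `DPASketch`, the recompute-based backward, the direct `os.getenv` read, and the assumed default of `"1"` for `NVTE_FP8_DPA_BWD` are all stand-ins.

```python
import os

import torch
import torch.nn.functional as F


class DPASketch(torch.autograd.Function):
    """Toy dot-product attention that saves only the tensors its backward needs."""

    @staticmethod
    def forward(ctx, q, k, v):
        # Hypothetical FP8 copies, standing in for the FP8 tensors that an
        # FP8 attention forward would produce.
        q8, k8, v8 = (t.to(torch.float8_e4m3fn) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)

        # Default of "1" is an assumption made for this sketch.
        if os.getenv("NVTE_FP8_DPA_BWD", "1") == "1":
            # FP8 backward: the FP8 tensors are needed again, so save them.
            ctx.save_for_backward(q8, k8, v8, out)
        else:
            # BF16 backward: the FP8 copies are dead after the forward, so
            # skipping them here is what lowers the peak memory footprint.
            ctx.save_for_backward(q, k, v, out)
        return out

    @staticmethod
    def backward(ctx, grad_out):
        q, k, v, _out = ctx.saved_tensors
        if q.dtype != torch.bfloat16:
            raise NotImplementedError("this toy only implements the BF16 backward")
        # Toy BF16 backward via recomputation; TE uses a fused attention kernel.
        with torch.enable_grad():
            q, k, v = (t.detach().requires_grad_() for t in (q, k, v))
            recomputed = F.scaled_dot_product_attention(q, k, v)
            return torch.autograd.grad(recomputed, (q, k, v), grad_out)
```

With NVTE_FP8_DPA_BWD=0, the autograd context then holds only the BF16 q/k/v/out set instead of the FP8 tensors as well, which is presumably where the peak-memory saving comes from.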

Fixes # (issue)

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [ ] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Infra/Build change
  • [ ] Code refactor

Changes

Please list the changes introduced in this PR:

  • Avoid saving the FP8 q/k/v/out tensors for the backward pass when the backward runs in BF16 (NVTE_FP8_DPA_BWD=0)

Checklist:

  • [ ] I have read and followed the contributing guidelines
  • [ ] The functionality is complete
  • [ ] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] My changes generate no new warnings
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] New and existing unit tests pass locally with my changes

guyueh1 · Aug 27 '24 17:08

/te-ci pytorch

ksivaman · Aug 27 '24 18:08