TransformerEngine

Warn when using fp8 weights + non-fp8 computation

Open kunlunl opened this pull request 7 months ago • 2 comments

Description

Currently, TE's Linear, LayerNormLinear, and GroupedLinear silently de-quantize fp8 weights to bf16 when self.fp8 is False. This PR changes that behavior to raise an error instead, to prevent fp8 weights from being used with non-fp8 computation.
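
As a rough sketch of the check described above (the helper name, the `is_fp8` marker, and the `fp8_enabled` flag are assumptions for illustration, not TE's actual API):

```python
# Hypothetical sketch only; TE's real code inspects its own fp8 tensor types.
def _check_fp8_weight(weight, fp8_enabled: bool):
    """Reject fp8-quantized weights when fp8 computation is disabled."""
    if getattr(weight, "is_fp8", False) and not fp8_enabled:
        raise RuntimeError(
            "fp8 weights cannot be used with non-fp8 computation; "
            "enable fp8 execution or provide higher-precision weights."
        )
```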

Type of change

  • [ ] Documentation change (change only to the documentation, either a fix or new content)
  • [x] Bug fix (non-breaking change which fixes an issue)
  • [ ] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] Infra/Build change
  • [ ] Code refactoring

Changes

Checklist:

  • [x] I have read and followed the contributing guidelines
  • [x] The functionality is complete
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [x] I have made corresponding changes to the documentation
  • [ ] My changes generate no new warnings
  • [x] I have added tests that prove my fix is effective or that my feature works
  • [x] New and existing unit tests pass locally with my changes

kunlunl · Apr 23, 2025

OK, I changed it to use warnings instead of raising an error.
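
For illustration, the hypothetical check sketched above could emit a warning instead and then fall back to the existing de-quantization to bf16 (again, names are assumptions, not TE's actual code):

```python
import warnings

# Hypothetical sketch only: warn instead of raising, then keep de-quantizing to bf16.
def _check_fp8_weight(weight, fp8_enabled: bool):
    if getattr(weight, "is_fp8", False) and not fp8_enabled:
        warnings.warn(
            "Using fp8 weights with non-fp8 computation; the weights will be "
            "de-quantized to bf16 before the matmul.",
            UserWarning,
        )
```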

kunlunl · Apr 25, 2025

/te-ci pytorch

timmoon10 · Apr 26, 2025