TransformerEngine
Warn when using fp8 weights + non-fp8 computation
Description
Currently, TE's Linear, LayerNormLinear, and GroupedLinear de-quantize fp8 weights to bf16 when self.fp8 is False. This MR changes that behavior to raise an error instead, preventing fp8 weights from being used with non-fp8 computation.
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or a new content)
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [ ] Code refactoring
Changes
Checklist:
- [x] I have read and followed the contributing guidelines
- [x] The functionality is complete
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
OK, I changed it to use warnings instead of raising an error.
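For illustration, a minimal sketch of the resulting guard is below. It assumes a hypothetical helper `weight_is_fp8()` for checking the weight's quantization state; the actual attribute and helper names in TE's modules may differ.

```python
import warnings

def _warn_on_fp8_weights_without_fp8_compute(self):
    # Sketch of the check described in this MR: `self.fp8` follows the
    # PR text, while `weight_is_fp8()` is an illustrative placeholder,
    # not TE's actual API.
    if not self.fp8 and self.weight_is_fp8():
        warnings.warn(
            "fp8 weights are being de-quantized to bf16 for non-fp8 "
            "computation; performance and numerics may differ from "
            "fp8 execution.",
            UserWarning,
        )
```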
/te-ci pytorch