Add support for fp8 (H100)
Native support in PyTorch is experimental:
https://github.com/pytorch-labs/float8_experimental
We could consider adding this, or wait for official support.
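For reference, the integration in that repo was essentially a module swap over the model's `nn.Linear` layers. A minimal sketch of the usage pattern, assuming the `Float8Linear` class and `swap_linear_with_float8_linear` helper exposed by `float8_experimental` at the time (import paths and signatures may differ between versions):

```python
# Sketch only: assumes the module-swap helpers from
# pytorch-labs/float8_experimental (Float8Linear, swap_linear_with_float8_linear);
# exact names/signatures may have changed between versions.
import torch
import torch.nn as nn

from float8_experimental.float8_linear import Float8Linear
from float8_experimental.float8_linear_utils import swap_linear_with_float8_linear

model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
).to("cuda", dtype=torch.bfloat16)

# Replace every nn.Linear with a Float8Linear that casts weights/activations
# to fp8 around the matmul and tracks amax history for scaling.
swap_linear_with_float8_linear(model, Float8Linear)

x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)
y = model(x)
```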
Initial results using the PyTorch codebase are not good: roughly a 10x decrease in throughput vs fp16 on H100.
https://github.com/predibase/lorax/tree/fp8
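The 10x number came from end-to-end serving, so a per-layer micro-benchmark would help isolate whether the slowdown is in the fp8 matmul itself or in the surrounding machinery. A rough sketch (same assumed `float8_experimental` helpers as above; amax/scale-sync details are omitted):

```python
# Rough micro-benchmark sketch: forward throughput of a bf16 linear layer
# vs the same layer swapped to fp8. The swap helper and Float8Linear class
# are assumed from float8_experimental and may differ by version; scaling
# sync (delayed-scaling amax history) is not handled here.
import time
import torch
import torch.nn as nn

from float8_experimental.float8_linear import Float8Linear
from float8_experimental.float8_linear_utils import swap_linear_with_float8_linear


def bench(module, x, iters=100):
    # Warm up, then time `iters` forward passes on the GPU.
    with torch.no_grad():
        for _ in range(10):
            module(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            module(x)
        torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)  # forwards per second


x = torch.randn(16, 4096, device="cuda", dtype=torch.bfloat16)

bf16_mod = nn.Sequential(nn.Linear(4096, 4096)).to("cuda", dtype=torch.bfloat16)
print("bf16 fwd/s:", bench(bf16_mod, x))

fp8_mod = nn.Sequential(nn.Linear(4096, 4096)).to("cuda", dtype=torch.bfloat16)
swap_linear_with_float8_linear(fp8_mod, Float8Linear)
print("fp8  fwd/s:", bench(fp8_mod, x))
```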
We will need to investigate Transformer Engine or dig into the PyTorch implementation in more detail. It appears there is too much conversion between dtypes happening at the moment, rather than everything running natively in fp8. A profiling sketch to check this is below.
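One way to check the conversion hypothesis before reaching for Transformer Engine is to profile a few forward passes and see whether cast/copy kernels dominate the CUDA time relative to the matmuls. A sketch using the standard torch.profiler API (model construction omitted; any fp8-swapped module works):

```python
# Sketch: profile a few forward passes and inspect which kernels dominate.
# If dtype-cast / copy ops (e.g. aten::to, aten::copy_) take a large share
# of CUDA time, that supports the "too much conversion" theory.
import torch
from torch.profiler import profile, ProfilerActivity


def profile_forward(model, x, steps=10):
    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        record_shapes=True,
    ) as prof:
        with torch.no_grad():
            for _ in range(steps):
                model(x)
    # Sort by total CUDA time to see whether matmuls or casts dominate.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```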