
fp8 support

Open vince62s opened this issue 2 years ago • 6 comments

If someone is motivated, OpenNMT-py could be adapted to support fp8 (on supported hardware) using this new library:

https://github.com/NVIDIA/TransformerEngine
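For reference, a minimal sketch of how a forward pass might opt into FP8 with Transformer Engine. This assumes TE's documented `fp8_autocast` / `DelayedScaling` API; the `forward_maybe_fp8` helper and its fallback behavior are hypothetical, not part of OpenNMT-py:

```python
# Hedged sketch, assuming Transformer Engine's fp8_autocast API.
# Falls back to an ordinary forward pass when TE is not installed
# (or when the hardware/CUDA stack does not support FP8 yet).
try:
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe
    HAS_TE = True
except ImportError:
    HAS_TE = False

def forward_maybe_fp8(model, inp, use_fp8=HAS_TE):
    """Run model(inp) under FP8 autocast when requested and available;
    otherwise run a plain forward pass."""
    if use_fp8:
        # HYBRID: E4M3 for forward tensors, E5M2 for gradients.
        fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)
        with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
            return model(inp)
    return model(inp)
```

Note that FP8 execution still requires hardware and CUDA-library support, which is exactly what the rest of this thread is about.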

cc: @guillaumekln @francoishernandez

vince62s avatar Feb 02 '23 07:02 vince62s

Well, no hurry for the RTX 4090; it's not ready yet. Quoting from the TransformerEngine issue:

Hi All,

First of all, I'm really sorry for the prolonged silence on this issue - I did not want to communicate anything before getting a full alignment internally. As noted in the RTX 4090 announcement and Ada whitepaper, Ada has FP8 TensorCore hardware. However, the software support for them is not currently available - e.g. there is no support for it exposed in cuBLASLt currently. The reason for it is that both the FP8 TC instruction as well as other features used in the fast FP8 GEMM kernels are different between Hopper and Ada (meaning a different set of kernels required for both architectures) and the Hopper support was prioritized. Once the FP8 support lands in CUDA and its libraries (tentatively scheduled for CUDA 12.1 in Q2), Transformer Engine will also fully support Ada.

Read the full discussion here: https://github.com/NVIDIA/TransformerEngine/issues/15

vince62s avatar Feb 02 '23 07:02 vince62s

@vince62s CUDA 12.1 has been released. Can Ada support be worked on now?

oscarbg avatar Mar 01 '23 18:03 oscarbg

@oscarbg It didn't work. cublasLtMatmul and cublasLtMatrixTransform still don't work with __nv_fp8_e4m3 on a 4090, even with the newest CUDA 12.1 (as of 2023-03-07). Can anyone get it working, or is this simply not supported by CUDA yet?
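For context on what those cuBLASLt calls are quantizing, E4M3 is the FP8 format in question: 1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7, maximum normal value 448, no infinities. A minimal pure-Python sketch that simulates rounding a float to the nearest E4M3-representable value (an illustration of the format, not NVIDIA's implementation):

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3 value
    (1 sign, 4 exponent, 3 mantissa bits; bias 7; max normal 448)."""
    if x == 0.0 or math.isnan(x):
        return x
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    MAX_NORMAL = 448.0       # S.1111.110 = 1.75 * 2^8
    MIN_NORMAL = 2.0 ** -6   # smallest normal (exponent field 0001)
    if mag >= MAX_NORMAL:
        return sign * MAX_NORMAL  # saturate: E4M3 has no infinities
    if mag < MIN_NORMAL:
        # subnormal range: values are multiples of 2^-9
        step = 2.0 ** -9
        return sign * round(mag / step) * step
    # normal range: 3 mantissa bits -> 8 steps per binade
    e = math.floor(math.log2(mag))
    step = 2.0 ** (e - 3)    # spacing of representable values at this scale
    return sign * round(mag / step) * step
```

With only 3 mantissa bits the spacing is coarse: for example, 0.3 rounds to 0.3125, and anything at or above 448 saturates, which is why FP8 training relies on per-tensor scaling factors managed by the library.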

AaronZLT avatar Mar 07 '23 05:03 AaronZLT

Great, I'll give it a try when I get some time.

vince62s avatar Apr 28 '23 07:04 vince62s

I tried it, but it is clearly not so easy to make it work in our scenario: https://github.com/NVIDIA/TransformerEngine/issues/230

vince62s avatar May 19 '23 07:05 vince62s