OpenNMT-py
fp8 support
If someone is motivated, OpenNMT-py could be adapted to support fp8 (on supported hardware) using this new library:
https://github.com/NVIDIA/TransformerEngine
cc: @guillaumekln @francoishernandez
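For context, here is a minimal sketch of how Transformer Engine's fp8 API is meant to be used, following its README. This assumes transformer_engine is installed on fp8-capable hardware with matching CUDA support; the import is guarded since the package only builds on supported setups.

```python
# Hedged sketch of Transformer Engine's documented fp8 usage.
try:
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe
    HAVE_TE = True
except ImportError:
    HAVE_TE = False

def fp8_forward(model, inp):
    """Run a forward pass with fp8 GEMMs via TE's fp8_autocast context."""
    # HYBRID = E4M3 for forward-pass tensors, E5M2 for gradients.
    fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        return model(inp)

print("Transformer Engine available:", HAVE_TE)
```

The point of the recipe object is that fp8 GEMMs need per-tensor scaling factors, which TE tracks and updates across iterations.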
Well, no hurry for RTX 4090, not ready yet.
Hi All,
First of all, I'm really sorry for the prolonged silence on this issue - I did not want to communicate anything before getting a full alignment internally. As noted in the RTX 4090 announcement and Ada whitepaper, Ada has FP8 TensorCore hardware. However, the software support for them is not currently available - e.g. there is no support for it exposed in cuBLASLt currently. The reason for it is that both the FP8 TC instruction as well as other features used in the fast FP8 GEMM kernels are different between Hopper and Ada (meaning a different set of kernels required for both architectures) and the Hopper support was prioritized. Once the FP8 support lands in CUDA and its libraries (tentatively scheduled for CUDA 12.1 in Q2), Transformer Engine will also fully support Ada.
Read more here: https://github.com/NVIDIA/TransformerEngine/issues/15
@vince62s CUDA 12.1 has been released. Can Ada support be worked on now?
@oscarbg It didn't work. cublasLtMatmul and cublasLtMatrixTransform still fail for __nv_fp8_e4m3 inputs on the 4090 with the newest CUDA 12.1 (as of 2023-03-07). Can anyone get this working, or is it simply not yet supported in CUDA?
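To make the __nv_fp8_e4m3 type above concrete, here is a pure-Python emulation that rounds a float to the nearest representable E4M3 value. This is an illustration only; the actual conversion behavior in cuBLASLt (e.g. saturation vs. NaN on overflow) is defined by CUDA, and this sketch simply saturates.

```python
import math

def e4m3_values():
    """Enumerate all non-negative finite values of NVIDIA's E4M3 format
    (bias 7, no infinities, exponent=1111 with mantissa=111 reserved for NaN)."""
    vals = []
    for exp_field in range(16):
        for man in range(8):
            if exp_field == 15 and man == 7:
                continue  # NaN encoding
            if exp_field == 0:
                v = (man / 8) * 2.0 ** -6   # subnormal
            else:
                v = (1 + man / 8) * 2.0 ** (exp_field - 7)
            vals.append(v)
    return sorted(set(vals))

_GRID = e4m3_values()

def quantize_e4m3(x):
    """Round |x| to the nearest representable E4M3 magnitude, saturating
    at the largest finite value (448), and reattach the sign."""
    m = min(abs(x), _GRID[-1])
    nearest = min(_GRID, key=lambda v: abs(v - m))
    return math.copysign(nearest, x)

print(quantize_e4m3(1.3))    # -> 1.25
print(quantize_e4m3(500.0))  # -> 448.0
```

The coarseness of the grid (only 3 mantissa bits) is exactly why fp8 GEMMs are paired with per-tensor scaling in Transformer Engine.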
Great, I'll give it a try when I get some time.
I tried it, but it is clearly not so easy to make it work in our scenario: https://github.com/NVIDIA/TransformerEngine/issues/230