
[AMD] Implement Flash Attention in Triton to enable transformers to run with Flash Attention on AMD GPUs.

Open · ByronHsu opened this issue 1 year ago · 4 comments

🚀 The feature, motivation and pitch

The official implementation of Flash Attention is in CUDA, so on AMD GPUs users cannot easily use Flash Attention with transformers to train LLMs. With this support, we can unlock many exciting use cases on AMD. The code is already available at https://triton-lang.org/main/getting-started/tutorials/06-fused-attention.html.
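For context, here is a hedged sketch of the tiled, online-softmax recurrence that the Triton fused-attention tutorial implements inside a single kernel. It is written in plain PyTorch (not Triton) so the recurrence is easy to follow; the function name, block size, and shapes are illustrative assumptions, not part of any existing kernel.

```python
# Unfused reference of the Flash Attention recurrence: process K/V in blocks
# while keeping a running row max (m) and running normalizer (l), so the full
# softmax matrix is never materialized. Illustrative sketch only.
import torch

def flash_attention_reference(q, k, v, block_size=128):
    # q, k, v: (seq_len, head_dim)
    seq_len, head_dim = q.shape
    scale = head_dim ** -0.5
    out = torch.zeros_like(q)
    m = torch.full((seq_len, 1), float("-inf"), dtype=q.dtype)  # running max
    l = torch.zeros((seq_len, 1), dtype=q.dtype)                # running sum
    for start in range(0, seq_len, block_size):
        kb = k[start:start + block_size]
        vb = v[start:start + block_size]
        s = (q @ kb.T) * scale                                  # scores for this block
        m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
        p = torch.exp(s - m_new)                                # partial softmax numerator
        correction = torch.exp(m - m_new)                       # rescale old accumulators
        l = l * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ vb
        m = m_new
    return out / l

# Agrees with naive softmax(QK^T / sqrt(d)) V up to floating-point error.
q, k, v = (torch.randn(256, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(flash_attention_reference(q, k, v), ref, atol=1e-4)
```

The Triton tutorial kernel fuses this same loop into a single GPU kernel per query block, which is what makes it a practical Flash Attention backend for AMD GPUs.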

Another option is to use FlexAttention from the PyTorch team, which uses torch.compile to optimize attention on top of the existing handwritten Triton kernels.
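For reference, a minimal sketch of what the FlexAttention path could look like, assuming a recent PyTorch build that ships `torch.nn.attention.flex_attention`. The causal `score_mod`, shapes, and dtype are illustrative assumptions; on ROCm builds of PyTorch the `"cuda"` device string maps to the AMD GPU.

```python
# Minimal FlexAttention sketch (assumes PyTorch >= 2.5 with flex_attention available).
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # score_mod: mask out future positions by sending their scores to -inf
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

B, H, S, D = 2, 8, 1024, 64  # illustrative shapes
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16) for _ in range(3))

# torch.compile lowers flex_attention into fused Triton attention kernels;
# uncompiled it runs in a slow eager reference mode.
compiled_flex = torch.compile(flex_attention)
out = compiled_flex(q, k, v, score_mod=causal)
```

The trade-off between the two options is maintenance versus control: FlexAttention offloads kernel generation to torch.compile, while a hand-written Triton kernel in Liger-Kernel gives full control over fusion and tuning on AMD hardware.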

Alternatives

No response

Additional context

No response

ByronHsu · Aug 27 '24 19:08