
Modify FLOPs in MFU calculation for causal mask when using FlashAttention.

Open · Yuxin-CV opened this issue 1 year ago · 1 comment

Hi, I suggest we modify the FLOPs calculation used for MFU to match the FLOPs accounting in the FlashAttention benchmark script.
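For reference, the FLOPs accounting in that benchmark looks roughly like the sketch below (paraphrased from the flash-attention repo's benchmark script; treat the exact constants as an assumption and check the script itself):

```python
# Paraphrased sketch of the FlashAttention benchmark's FLOPs accounting.
def attn_flops(batch: int, seqlen: int, headdim: int, nheads: int,
               causal: bool, mode: str = "fwd") -> float:
    assert mode in ("fwd", "bwd", "fwd_bwd")
    # Full attention does 4 * seqlen^2 * nheads * headdim FLOPs per batch
    # element (QK^T plus attn @ V, 2 FLOPs per multiply-accumulate each);
    # with a causal mask only half of that work is performed.
    f = 4 * batch * seqlen**2 * nheads * headdim // (2 if causal else 1)
    # Backward is counted as 2.5x forward; fwd + bwd together as 3.5x.
    return f if mode == "fwd" else (2.5 * f if mode == "bwd" else 3.5 * f)
```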

Specifically, the current calculation for the causal mask can exceed 100% MFU at seq_len = 16k (189 * 2 / 312 = 1.21), which is inaccurate: under a causal mask FlashAttention only computes the unmasked half of the attention score matrix, so counting full-attention FLOPs roughly doubles the numerator. The attention FLOPs for the causal-mask setting should therefore be divided by 2 when using FlashAttention.
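A minimal sketch of the proposed fix, assuming a per-token FLOPs formula of the common 6 * num_params + 12 * n_layers * n_heads * head_dim * seq_len form (the function and argument names here are hypothetical, not torchtitan's actual API):

```python
def num_flop_per_token(num_params: int, n_layers: int, n_heads: int,
                       head_dim: int, seq_len: int, causal: bool = True) -> float:
    # Dense matmuls: ~6 FLOPs per parameter per token (fwd + bwd).
    dense = 6 * num_params
    # Attention score/value matmuls: 12 * L * H * D * T per token for full
    # attention (fwd + bwd). FlashAttention skips the masked upper triangle
    # under a causal mask, so roughly half of these FLOPs never execute.
    attn = 12 * n_layers * n_heads * head_dim * seq_len
    if causal:
        attn /= 2  # proposed fix: count only the FLOPs actually performed
    return dense + attn

# MFU = achieved FLOPs/sec over peak FLOPs/sec (312 TFLOP/s for A100 BF16):
# mfu = num_flop_per_token(...) * tokens_per_sec / 312e12
```

With this adjustment, the seq_len = 16k case above reports roughly half the attention FLOPs and stays below 100% MFU.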

[Figure: FlashAttention-2 forward + backward benchmark on A100 (flash2_a100_fwd_bwd_benchmark)]

Yuxin-CV · May 17, 2024

There was some past discussion on this (https://github.com/pytorch/torchtitan/pull/280).

awgu · May 17, 2024