
opt:opt ltor masks

[Open] Baibaifan opened this issue · 0 comments

Problem:

In Megatron-LM, there is a memory-access bottleneck when using `reset_attention_mask` to construct long packed sequences. The problem is in the masking loop of `_get_ltor_masks_and_position_ids` (screenshot omitted; a paraphrased sketch follows).
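For context, the relevant loop looks roughly like this (a paraphrased, simplified sketch of Megatron-LM's `get_ltor_masks_and_position_ids`; exact details vary by version):

```python
import torch

def get_ltor_masks_and_position_ids_sketch(data, eod_token):
    """Sketch: build a causal mask, then reset it at every EOD boundary."""
    micro_batch_size, seq_length = data.size()

    # Lower-triangular (causal) mask, one per sample.
    attention_mask = torch.tril(torch.ones(
        (micro_batch_size, seq_length, seq_length), device=data.device)
    ).view(micro_batch_size, 1, seq_length, seq_length)

    position_ids = torch.arange(
        seq_length, dtype=torch.long, device=data.device
    ).unsqueeze(0).expand_as(data)

    for b in range(micro_batch_size):
        # Every EOD token marks the end of one packed document.
        eod_index = position_ids[b, data[b] == eod_token]
        for j in range(eod_index.numel()):
            i = eod_index[j]
            # One strided write into the mask per document boundary --
            # this per-EOD-index access pattern is the bottleneck the
            # issue describes.
            attention_mask[b, 0, (i + 1):, :(i + 1)] = 0

    return attention_mask, position_ids
```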

When a sequence of length seq_len is packed from many short documents, `eod_index` contains many entries, and each entry triggers a separate access to `attention_mask` to assign 0 to the corresponding region. In extreme cases at 32k sequence length there are many such assignments, which makes data loading very slow, and the cost keeps growing as the sequence length (and thus the number of positions to assign) increases. (Figure: loading time vs. number of EOD positions — image omitted.)

Solution:

Build the attention mask as a block-diagonal matrix in a single tensor operation (e.g., via `torch.block_diag`), replacing the per-EOD-index assignments with one vectorized construction. (Screenshot omitted; a sketch follows.)
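A minimal sketch of that direction, assuming `torch.block_diag` (an illustrative rewrite, not the author's exact patch): each packed document becomes one causal block, and a single `block_diag` call materializes the whole mask instead of one strided assignment per EOD index.

```python
import torch

def build_block_diag_causal_mask(data, eod_token):
    """Sketch: one block-diagonal causal mask per sample via torch.block_diag."""
    micro_batch_size, seq_length = data.size()
    masks = []
    for b in range(micro_batch_size):
        # Document boundaries: one past each EOD token, plus the sequence end.
        eod_pos = (data[b] == eod_token).nonzero(as_tuple=True)[0]
        boundaries = torch.cat([
            eod_pos + 1,
            torch.tensor([seq_length], device=data.device),
        ])
        lengths, prev = [], 0
        for end in boundaries.tolist():
            if end > prev:              # skip a duplicate final boundary
                lengths.append(end - prev)
                prev = end
        # One causal block per document; a single block_diag call builds
        # the full [seq, seq] mask instead of len(eod_pos) strided writes.
        blocks = [torch.tril(torch.ones(n, n, device=data.device))
                  for n in lengths]
        masks.append(torch.block_diag(*blocks))
    attention_mask = torch.stack(masks).unsqueeze(1)  # [b, 1, seq, seq]
    # Match Megatron's boolean convention: True = position is masked out.
    return attention_mask < 0.5
```

The potential win is replacing one scattered write per document into a large [seq, seq] tensor with a single vectorized construction, which matters most at long sequence lengths such as 32k with many short packed documents.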

— Baibaifan, Sep 24 '24 06:09