
MLGRU

Open AACengineer opened this issue 1 year ago • 1 comment

From the computational formula of MLGRU, it appears that parallelism across tokens is disrupted during the prefill phase, whereas Transformer++ is able to maintain parallelism across tokens. I have two questions:

  1. Does "latency" in Figure 4(d) mean first-token latency?
  2. And in Figure 4(d), does Transformer++ make use of token parallelism?

AACengineer avatar Jun 21 '24 09:06 AACengineer

@AACengineer Hi, Transformer++ also conducts decoding in an autoregressive manner. During training, Transformer++ can be fully parallelized. However, we can also make use of a parallel scan to improve token parallelism. And because the linear-time GRU requires far fewer FLOPs than self-attention, our training efficiency can be much better. Also, the GRU does not need a KV cache, so the decoding space complexity is O(1).
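
For intuition, here is a minimal sketch (not the repository's actual kernel) of how a gated linear recurrence of the form h_t = f_t ⊙ h_{t-1} + i_t ⊙ c_t, the same shape as the MLGRU hidden-state update, can be evaluated with an associative (parallel) scan instead of a sequential loop. The names `f`, `i`, `c` and the JAX implementation are illustrative assumptions, not the project's code:

```python
# Sketch: parallel scan for a gated linear recurrence h_t = f_t * h_{t-1} + v_t.
# Each step is an affine map x -> a*x + b, and affine maps compose associatively,
# so the whole sequence can be scanned in parallel over tokens.
import jax
import jax.numpy as jnp


def parallel_linear_recurrence(f, v):
    """Compute h_t = f_t * h_{t-1} + v_t with h_0 = 0, over axis 0, in parallel.

    f, v: arrays of shape (seq_len, hidden_dim).
    """
    def combine(left, right):
        # Composing "multiply by a1, add b1" then "multiply by a2, add b2"
        # gives "multiply by a1*a2, add b1*a2 + b2".
        a1, b1 = left
        a2, b2 = right
        return a1 * a2, b1 * a2 + b2

    _, h = jax.lax.associative_scan(combine, (f, v), axis=0)
    return h


# Toy usage: random gates and candidate states for an 8-token sequence.
f = jax.nn.sigmoid(jax.random.normal(jax.random.PRNGKey(0), (8, 4)))  # forget gate in (0, 1)
c = jax.random.normal(jax.random.PRNGKey(1), (8, 4))                  # candidate states
i = 1.0 - f                                                           # illustrative input gate
h = parallel_linear_recurrence(f, i * c)

# Sanity check against the naive sequential loop (the O(1)-state decoding path).
h_seq = jnp.zeros((4,))
for t in range(8):
    h_seq = f[t] * h_seq + i[t] * c[t]
assert jnp.allclose(h[-1], h_seq, atol=1e-5)
```

The sequential loop at the end also shows why no KV cache is needed at decode time: only the current hidden state `h_seq` is carried forward, so memory stays constant in sequence length.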

yzhangcs avatar Jun 21 '24 14:06 yzhangcs