
The project is not implemented for 70B LLaMA?

Open • zhangzhenyu13 opened this issue 11 months ago • 7 comments

No GQA implementation is present, so ComposerLLAMA cannot scale to the 70B model. Maybe we need to implement GQA and introduce head_z for wq and head_z_kv for wk and wv?

zhangzhenyu13 avatar Mar 08 '24 07:03 zhangzhenyu13
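A minimal sketch (editorial, not code from this repo) of how the proposed masks could be wired into a GQA attention block; the module and shapes are illustrative, following the head_z / head_z_kv naming above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GQAMaskedAttention(nn.Module):
    """Illustrative GQA block with per-head pruning masks (not LLM-Shearing code)."""

    def __init__(self, dim: int, n_q_heads: int, n_kv_heads: int):
        super().__init__()
        assert n_q_heads % n_kv_heads == 0
        self.n_q_heads, self.n_kv_heads = n_q_heads, n_kv_heads
        self.head_dim = dim // n_q_heads
        self.wq = nn.Linear(dim, n_q_heads * self.head_dim, bias=False)
        self.wk = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wo = nn.Linear(n_q_heads * self.head_dim, dim, bias=False)
        # Pruning masks: one entry per query head, one per KV head.
        self.head_z = nn.Parameter(torch.ones(n_q_heads))      # masks wq heads
        self.head_z_kv = nn.Parameter(torch.ones(n_kv_heads))  # masks wk/wv heads

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape
        q = self.wq(x).view(B, T, self.n_q_heads, self.head_dim)
        k = self.wk(x).view(B, T, self.n_kv_heads, self.head_dim)
        v = self.wv(x).view(B, T, self.n_kv_heads, self.head_dim)
        # Apply the head-level masks before attention.
        q = q * self.head_z.view(1, 1, -1, 1)
        k = k * self.head_z_kv.view(1, 1, -1, 1)
        v = v * self.head_z_kv.view(1, 1, -1, 1)
        # Repeat each KV head so every query head in its group can attend to it.
        rep = self.n_q_heads // self.n_kv_heads
        k = k.repeat_interleave(rep, dim=2)
        v = v.repeat_interleave(rep, dim=2)
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # (B, heads, T, head_dim)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.wo(out.transpose(1, 2).reshape(B, T, -1))
```

As with head_z in the existing MHA path, heads whose mask goes to zero would afterwards be physically removed from wq (and from wk/wv for KV heads) to actually shrink the model.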

Hi, the modeling file currently does not support GQA, but it should require only minimal changes to support it. What you described should work perfectly :)

xiamengzhou avatar Mar 12 '24 01:03 xiamengzhou

It seems we need a hierarchical pruning scheme for GQA: group pruning, plus head pruning inside each group? We need to keep the number of query heads in each group the same.

ZhiYuanZeng avatar Mar 12 '24 10:03 ZhiYuanZeng
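A sketch of the hierarchical idea under assumed shapes: an outer z_group mask over KV groups and an inner z_head mask over query heads within each group. Note that independent inner masks could still leave groups with unequal head counts unless they are tied across groups, which is where the discussion below ends up:

```python
import torch

# Hypothetical masks for one layer: 8 KV groups, 8 query heads per group.
n_groups, heads_per_group = 8, 8
z_group = torch.ones(n_groups)                  # 0 prunes a whole group and its KV head
z_head = torch.ones(n_groups, heads_per_group)  # 0 prunes one query head inside a group

# A query head survives only if its group survives.
q_head_mask = (z_group.unsqueeze(-1) * z_head).reshape(-1)  # flat mask over all query heads
kv_head_mask = z_group                                      # one KV head per group
```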

> It seems we need a hierarchical pruning scheme for GQA: group pruning, plus head pruning inside each group? We need to keep the number of query heads in each group the same.

To keep the pruned model able to run under tensor parallelism (TP), it would be better to keep the number of groups unchanged. We only need to prune the query heads within each group, so a mask z_group_query of shape layer_num × group_num × group_heads_query may need to be initialized.

zhangzhenyu13 avatar Mar 12 '24 12:03 zhangzhenyu13
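A sketch of that initialization under assumed 70B-like dimensions (80 layers, 8 KV groups, 8 query heads per group); z_group_query and mask_queries are illustrative names:

```python
import torch
import torch.nn as nn

layer_num, group_num, group_heads_query = 80, 8, 8

# One learnable mask entry per (layer, group, in-group query head). The groups
# themselves are never pruned, so the group count, and hence TP sharding, is preserved.
z_group_query = nn.Parameter(torch.ones(layer_num, group_num, group_heads_query))

def mask_queries(q: torch.Tensor, layer_idx: int) -> torch.Tensor:
    # q: (batch, seq_len, group_num * group_heads_query, head_dim)
    z = z_group_query[layer_idx].reshape(1, 1, -1, 1)
    return q * z
```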

Pruning queries might cause the number of query heads to differ across groups. So maybe group-based pruning is more reasonable? @zhangzhenyu13

xiamengzhou avatar Mar 12 '24 12:03 xiamengzhou

> Pruning queries might cause the number of query heads to differ across groups. So maybe group-based pruning is more reasonable? @zhangzhenyu13

Could we share the query-head mask across the different groups?

ZhiYuanZeng avatar Mar 12 '24 13:03 ZhiYuanZeng

> Pruning queries might cause the number of query heads to differ across groups. So maybe group-based pruning is more reasonable? @zhangzhenyu13

Yes, your understanding is right. We need to share z across groups.

zhangzhenyu13 avatar Mar 13 '24 04:03 zhangzhenyu13
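A sketch of the agreed scheme under the same assumed shapes: one intra-group query mask shared by all groups, so every group keeps exactly the same set of in-group head positions:

```python
import torch

group_num, group_heads_query = 8, 8

# A single mask over in-group query-head positions, shared by every group.
z_query_shared = torch.ones(group_heads_query)

# Broadcasting the same mask to all groups keeps head counts uniform:
# if k positions survive, every group keeps exactly k query heads.
q_head_mask = z_query_shared.expand(group_num, group_heads_query).reshape(-1)
```

After pruning, this yields a uniform group_num × k layout that still shards cleanly under tensor parallelism.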

Hi @zhangzhenyu13, I have some confusion. The author's Composer LLaMA file does not implement any GQA functionality. Did you implement the GQA forward pass yourself? Which LLaMA repository implementation would be best to refer to?

Longyichen avatar Apr 01 '24 14:04 Longyichen