
[Question] gptj, mpt support.

Open DongqiShen opened this issue 1 year ago • 0 comments

Hi there. I looked through the README and didn't find out-of-the-box support for these models. Although they have a structure similar to GPT-2, it is still relatively hard for an LLM engineer to write CUDA kernels. I tried FasterTransformer to speed up MOSS, which is extremely fast, and I look forward to using lightseq. Also, I think you should update the README, since I saw that LLaMA is supported. That matters because Baichuan has almost the same structure as LLaMA, which is especially important for the Chinese open-source community.

One more question: would it be possible to implement FlashAttention here in a way that supports more NVIDIA cards, such as the V100? I saw a collaborator comment saying the original implementation does not support the V100. From my naive understanding, FlashAttention is mainly an engineering problem, and the key is using shared memory? Thanks for your great work.
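For what it's worth, a minimal sketch of the idea I mean (not lightseq's or FlashAttention's actual implementation, and the function names here are my own): keys and values are processed in small tiles that would fit in fast on-chip shared memory, and a running max and softmax normalizer are corrected incrementally, so the full N x N score matrix never has to be materialized. That tiling/online-softmax part is hardware-agnostic; the V100 limitation, as I understand it, comes from lower-level kernel details.

```python
# Illustrative pure-Python sketch of FlashAttention-style tiled attention
# for a single query vector. Assumed/hypothetical code, not library API.
import math

def naive_attention(q, K, V):
    """Reference: softmax(q . K^T) V, materializing all scores at once."""
    scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in K]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    d = len(V[0])
    return [sum(e * v[j] for e, v in zip(exps, V)) / z for j in range(d)]

def tiled_attention(q, K, V, tile=2):
    """Same result, streaming over K/V tiles with a running max/normalizer."""
    d = len(V[0])
    m = -math.inf          # running max of scores seen so far
    z = 0.0                # running softmax normalizer
    acc = [0.0] * d        # running unnormalized weighted sum of values
    for start in range(0, len(K), tile):
        Kt, Vt = K[start:start + tile], V[start:start + tile]
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in Kt]
        m_new = max(m, max(scores))
        scale = math.exp(m - m_new)   # rescale previously accumulated sums
        z *= scale
        acc = [a * scale for a in acc]
        for s, v in zip(scores, Vt):
            e = math.exp(s - m_new)
            z += e
            acc = [a + e * vj for a, vj in zip(acc, v)]
        m = m_new
    return [a / z for a in acc]
```

Both functions agree numerically; the tiled version only ever looks at `tile` keys and values at a time, which is what makes the shared-memory formulation possible on a GPU.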

DongqiShen — Jul 05 '23 10:07