
Will these optimizations be integrated into HF's code?

Open lucasjinreal opened this issue 1 year ago • 7 comments

so that everyone can use them out of the box?

lucasjinreal avatar Dec 01 '23 07:12 lucasjinreal

Most of these features are already supported in Lit-GPT (if you're looking for finetuning LLMs) and more of this will be supported soon. You can use LLMs from HF model hub.

aniketmaurya avatar Dec 01 '23 11:12 aniketmaurya

Thanks for the interest! We already support most of the optimizations described here:

SunMarc avatar Dec 01 '23 17:12 SunMarc

@SunMarc I think there might still be some gaps in how the kv-cache is handled during inference. Specifically, the link you sent is about vision models, not text generation.

We should chat more about this - I'd love to see the techniques here integrated.
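For context, the kv-cache approach in question can be sketched as a statically pre-allocated cache whose shapes never change across decoding steps, which is what makes it friendly to torch.compile. This is a minimal illustrative sketch under that assumption, not the actual implementation in either repo; the class and parameter names are made up:

```python
import torch

class StaticKVCache:
    """Key/value cache pre-allocated to a fixed max length, so tensor
    shapes stay constant across decoding steps (torch.compile-friendly)."""

    def __init__(self, batch, heads, max_len, head_dim, dtype=torch.float32):
        shape = (batch, heads, max_len, head_dim)
        self.k = torch.zeros(shape, dtype=dtype)
        self.v = torch.zeros(shape, dtype=dtype)

    def update(self, pos, k_new, v_new):
        # Write the new token's K/V at position `pos` in place;
        # the cache tensors themselves never change shape.
        self.k[:, :, pos] = k_new
        self.v[:, :, pos] = v_new
        return self.k, self.v

cache = StaticKVCache(batch=1, heads=8, max_len=16, head_dim=64)
k, v = cache.update(0, torch.randn(1, 8, 64), torch.randn(1, 8, 64))
```

The key point is that `update` mutates a fixed-size buffer rather than concatenating a growing tensor each step, so the compiled graph sees the same shapes every iteration.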

Chillee avatar Dec 01 '23 18:12 Chillee

Yes, absolutely! cc @younesbelkada for visibility

SunMarc avatar Dec 01 '23 19:12 SunMarc

These optimizations should already be in HF. Moreover, some hardware-specific optimizations, like writing custom CUDA kernels for GPTQ, and paged attention (e.g. flash_attn2), would make inference even faster.

https://github.com/turboderp/exllamav2 has benchmarked llama-7b at 190+ t/s on a single 3090 Ti, which matches this repo on 8xA100, but a 3090 Ti has only about 1/3 the FLOPS of a single A100. So hardware-level optimization is another driver of performance.

yhyu13 avatar Dec 03 '23 08:12 yhyu13

Hi, does torch.compile work with AWQ?

(It seems HF already supports AWQ, but the quantization method might not be the same as in this repo.)

How do you enable speculative decoding in HF?
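For reference, the core loop of speculative decoding can be sketched with dummy stand-in scoring functions; everything below (`draft_logits`, `target_logits`, `speculative_step`) is made up for illustration and is not HF's API. In transformers, a related mechanism is assisted generation, where a small `assistant_model` is passed to `generate`:

```python
import torch

VOCAB = 50  # toy vocabulary size

def draft_logits(tokens):
    # Dummy stand-in for a small, fast draft model (hypothetical).
    g = torch.Generator().manual_seed(tokens[-1])
    return torch.randn(VOCAB, generator=g)

def target_logits(tokens):
    # Dummy stand-in for the large target model (hypothetical).
    g = torch.Generator().manual_seed(tokens[-1] + 1)
    return torch.randn(VOCAB, generator=g)

def speculative_step(tokens, k=4):
    # 1) The draft model proposes k tokens greedily.
    proposal = list(tokens)
    for _ in range(k):
        proposal.append(int(draft_logits(proposal).argmax()))
    drafted = proposal[len(tokens):]
    # 2) The target model verifies the draft (a single batched pass in
    #    practice): accept drafted tokens while they match the target's
    #    greedy choice, then emit the target's correction and stop.
    accepted = []
    for tok in drafted:
        best = int(target_logits(list(tokens) + accepted).argmax())
        accepted.append(best)
        if best != tok:
            break
    return list(tokens) + accepted

out = speculative_step([1, 2, 3])
```

The speedup comes from step 2: the expensive target model scores up to k drafted tokens in one forward pass instead of one pass per token, while the accept-or-correct rule keeps the output identical to greedy decoding with the target model alone.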

lucasjinreal avatar Dec 04 '23 02:12 lucasjinreal

@yhyu13

https://github.com/turboderp/exllamav2 has benchmarked llama-7b at 190+ t/s on a single 3090 Ti, which matches this repo on 8xA100, but a 3090 Ti has only about 1/3 the FLOPS of a single A100.

To be clear, the benchmark in this repo is 197 t/s on a single A100 with a group size of 32, while exllamav2 is running on a single 4090 with a group size of 128.

Still certainly very good results from exllamav2 :)

Chillee avatar Dec 04 '23 19:12 Chillee