
How to use low-bit KV Cache in flashinfer?

zhaoyang-star opened this issue on Feb 18, 2024 · 7 comments

From the blog I noticed that FlashInfer implements low-precision attention kernels, achieving nearly linear speedup with respect to the compression ratio (~4x for 4-bit, ~2x for 8-bit). This feature is great and I would like to try it, but there is no demo or toy code showing how to use it. Could you please share more details?

zhaoyang-star · Feb 18 '24

I haven't exposed the low-bit KV cache in the PyTorch APIs yet (it is available through the C++ APIs); I will do it tomorrow :)

yzh119 · Feb 18 '24

Glad to hear that! I can't wait to try it out. I think quantizing the KV cache from float16/bfloat16 to 4 bits will need calibration. It would be better if the feature were released together with a demo and benchmark results (latency, throughput, and accuracy).
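For illustration, a typical asymmetric 4-bit quantization scheme stores one scale and zero-point per group of values and packs two 4-bit codes per byte. The sketch below is a generic PyTorch example of that idea, not the exact grouping or packing used by Atom or FlashInfer:

```python
import torch

def quantize_kv_int4(kv: torch.Tensor, group_size: int = 128):
    """Asymmetric 4-bit quantization along the last dim, one (scale, zero-point) per group.

    Generic illustration only; Atom/FlashInfer use their own grouping and packing layout.
    Assumes kv.numel() is divisible by group_size.
    """
    orig_shape = kv.shape
    groups = kv.reshape(-1, group_size).float()
    g_min = groups.min(dim=-1, keepdim=True).values
    g_max = groups.max(dim=-1, keepdim=True).values
    scale = (g_max - g_min).clamp(min=1e-6) / 15.0    # 4-bit code range is 0..15
    zero = (-g_min / scale).round()
    q = (groups / scale + zero).round().clamp(0, 15).to(torch.uint8)
    packed = q[:, 0::2] | (q[:, 1::2] << 4)           # pack two nibbles per byte
    return packed, scale, zero, orig_shape

def dequantize_kv_int4(packed, scale, zero, orig_shape):
    lo = (packed & 0xF).float()
    hi = (packed >> 4).float()
    q = torch.stack([lo, hi], dim=-1).reshape(packed.shape[0], -1)
    return ((q - zero) * scale).reshape(orig_shape)
```

Calibration would then amount to choosing the grouping and the clipping ranges (here simply per-group min/max) on representative activations.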

BTW, someone is already trying to port FlashInfer to vLLM (see #2772) to speed up the decode phase. I also ported FlashAttention to vLLM (see #2744) and plan to benchmark FlashAttention and FlashInfer within the vLLM framework.

zhaoyang-star · Feb 18 '24

Thanks for letting me know; it's interesting to see that FlashAttention is starting to support paged KV cache.

yzh119 · Feb 18 '24

> It would be better if the feature were released together with a demo and benchmark results (latency, throughput, and accuracy).

You can check our manuscript: Atom: Low-bit Quantization for Efficient and Accurate LLM Serving.

yzh119 · Feb 18 '24

PyTorch APIs for fp8 kv-cache are exposed in #156 .

I'm finalizing the int4/int8 fused-dequant attention kernels with some optimizations such as fast int4/int8-to-float16 conversions. I expect to merge these changes by this Thursday.
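A minimal sketch of what the fp8 KV-cache path looks like from PyTorch, assuming single_decode_with_kv_cache accepts a float8_e4m3fn K/V cache alongside a float16 query as exposed in #156 (argument names and dtype handling may differ across versions, so please check the docs for your release):

```python
import torch
import flashinfer  # assumes a FlashInfer build with fp8 KV-cache support (#156)

num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 32, 128, 2048

# fp16 query for the current decode step
q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")

# K/V cache stored in fp8 (e4m3); the kernel dequantizes on the fly inside attention
k = torch.randn(kv_len, num_kv_heads, head_dim, device="cuda").to(torch.float8_e4m3fn)
v = torch.randn(kv_len, num_kv_heads, head_dim, device="cuda").to(torch.float8_e4m3fn)

o = flashinfer.single_decode_with_kv_cache(q, k, v)  # fp16 output, fp8 KV cache
```

The batched paged-KV wrappers should follow the same idea (fp8 cache tensors, fp16/bf16 query), but check the wrapper documentation for the exact setup in your version.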

yzh119 · Mar 05 '24

> PyTorch APIs for fp8 kv-cache are exposed in #156.
>
> I'm finalizing the int4/int8 fused-dequant attention kernels with some optimizations such as fast int4/int8-to-float16 conversions. I expect to merge these changes by this Thursday.

Hi @yzh119, as mentioned in https://flashinfer.ai/2024/02/02/introduce-flashinfer.html:

> Our next release will include the 4-bit fused dequantize+attention operators proposed in Atom and LoRA operators used in Punica.

When is Atom quantization expected to be fully integrated into FlashInfer? Is there a detailed timeline available? Thanks.

zhyncs · Mar 28 '24

Hi, is there any plan to integrate the 4-bit fused dequantize+attention operators proposed in Atom into FlashInfer? Looking forward to this new feature.

SherrySwift · Sep 14 '24