ChunkLlama

How do I use it in a vLLM deployment?

Open jchang98 opened this issue 1 year ago • 6 comments

How can I use this approach in a vLLM deployment without training? Can you give me a specific example? Thanks.

jchang98 avatar Mar 05 '24 16:03 jchang98

Thank you for bringing this to our attention. Unfortunately, the current version of vLLM does not support returning attention scores. However, this functionality is planned for the next vLLM release.
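
For context on why this matters: DCA merges partial attention outputs computed over different key chunks, and that merge needs the per-query log-sum-exp of the attention logits (the softmax_lse that FlashAttention-style kernels can return) in addition to the output tensor itself. A minimal PyTorch sketch of the merge step, for illustration only (not the exact code in this repo):

```python
import torch

def merge_partial_attention(o1, lse1, o2, lse2):
    # o1, o2:     partial attention outputs over two disjoint key chunks,
    #             shape (batch, heads, q_len, head_dim)
    # lse1, lse2: per-query log-sum-exp of the attention logits for each chunk,
    #             shape (batch, heads, q_len)
    lse = torch.logaddexp(lse1, lse2)          # normalizer of the combined softmax
    w1 = torch.exp(lse1 - lse).unsqueeze(-1)   # relative weight of chunk 1
    w2 = torch.exp(lse2 - lse).unsqueeze(-1)   # relative weight of chunk 2
    out = w1 * o1 + w2 * o2                    # equals full attention over both chunks
    return out, lse
```

Without softmax_lse from the kernel, these weights cannot be recovered from the output tensor alone, which is why the current vLLM interface is not sufficient for DCA.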

In the meantime, we are working on implementing paged attention (the key feature of vLLM) as well as flash decoding. These enhancements should speed up generation and reduce the GPU memory footprint of the KV cache.

We appreciate your patience while we work on these developments. Stay tuned for updates.

ChenxinAn-fdu avatar Mar 06 '24 14:03 ChenxinAn-fdu

@ChenxinAn-fdu OK, thanks for your response

jchang98 avatar Mar 06 '24 14:03 jchang98

I have pushed the code for flash decoding; it significantly reduces memory consumption when decoding with a KV cache. It may be helpful for you.
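
If you want to see what this looks like at the kernel level, here is a rough sketch of a single decoding step against a preallocated KV cache using flash_attn.flash_attn_with_kvcache, the flash-attn 2.x entry point that flash decoding builds on. The shapes and sizes are made up for illustration, and this is not the exact code in this repo:

```python
import torch
from flash_attn import flash_attn_with_kvcache  # requires flash-attn >= 2.2

batch, heads, head_dim = 1, 32, 128
max_seq_len, cur_len = 16384, 4096  # illustrative sizes

# Preallocated KV cache; only the first `cur_len` positions hold valid entries.
k_cache = torch.zeros(batch, max_seq_len, heads, head_dim,
                      dtype=torch.float16, device="cuda")
v_cache = torch.zeros_like(k_cache)
cache_seqlens = torch.full((batch,), cur_len, dtype=torch.int32, device="cuda")

# One decoding step: a single new query token and its key/value.
q = torch.randn(batch, 1, heads, head_dim, dtype=torch.float16, device="cuda")
k_new, v_new = torch.randn_like(q), torch.randn_like(q)

# Writes k_new/v_new into the cache at position `cache_seqlens` and attends
# over the cache with the split-KV (flash-decoding) kernel, so the full
# attention matrix is never materialized.
out = flash_attn_with_kvcache(
    q, k_cache, v_cache, k=k_new, v=v_new,
    cache_seqlens=cache_seqlens, causal=True,
)
```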

ChenxinAn-fdu avatar Apr 03 '24 06:04 ChenxinAn-fdu

Looking forward to the support in vLLM!

skyshine102 avatar Apr 16 '24 16:04 skyshine102

@ChenxinAn-fdu Does vLLM support DCA now? We'd like to use this feature in our deployment.

Shuai-Xie avatar May 08 '24 10:05 Shuai-Xie

@Shuai-Xie Hi, I opened an issue in their official repo, but it seems that the current version of vLLM only supports returning the output tensor without softmax_lse. We plan to implement the integration ourselves.

If you do not need continuous batching, the current repo has already implemented flash_decoding. You can use it for some preliminary experiments (see the sketch below).
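
Concretely, a preliminary experiment with HF transformers looks roughly like the sketch below. The model name, dtype, and generation settings are just placeholders, and the patching step is left as a comment because the exact import depends on the version of this repo (please follow the README for it):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Apply the ChunkLlama / flash_decoding monkey patch from this repo here,
# before instantiating the model (see the README for the exact import and
# function name; it is deliberately omitted in this sketch).

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto",
)

prompt = "Summarize the following document:\n..."  # long-context input goes here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```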

ChenxinAn-fdu avatar May 08 '24 10:05 ChenxinAn-fdu