SnapKV

Question: is key_states_compress used for inference?

jq-wei opened this issue 1 year ago · 1 comment

Hi,

Thanks for the great contribution!

I have a question about the usage of `key_states_compress`. If I understand correctly, `key_states_compress` holds the top-k tokens (clusters) selected from the prompt during the prefilling stage. Then, during decoding, a new query should only compute attention over `key_states_compress` plus the newly generated key states. However, I see that flash-attn is called with the full prompt's `key_states`, and `key_states_compress` is not used. Is this intended, or am I missing something?
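
To make the question concrete, here is my rough understanding of the prefill-time selection as a minimal, self-contained sketch. This is not the actual SnapKV code; the function name, `window_size`, `kernel_size`, and the default capacity are assumptions for illustration only:

```python
# Minimal sketch (NOT the actual SnapKV implementation) of prefill-time KV selection:
# queries in a small observation window at the end of the prompt score all prompt keys,
# and the top-scoring key/value positions are kept as the compressed cache.
import torch
import torch.nn.functional as F

def compress_kv(key_states, value_states, query_states,
                window_size=32, max_capacity_prompt=256, kernel_size=7):
    # key_states, value_states, query_states: [bsz, num_heads, seq_len, head_dim]
    bsz, num_heads, seq_len, head_dim = key_states.shape
    if seq_len <= max_capacity_prompt:
        return key_states, value_states

    # Attention of the last `window_size` queries over all prompt keys.
    obs_q = query_states[:, :, -window_size:, :]
    attn = torch.matmul(obs_q, key_states.transpose(-2, -1)) / head_dim ** 0.5
    attn = F.softmax(attn, dim=-1)

    # Accumulate importance per key position, excluding the observation window itself.
    scores = attn[..., : seq_len - window_size].sum(dim=-2)   # [bsz, heads, seq_len - window]
    # Pooling clusters neighbouring tokens so selection is less scattered (an assumption here).
    scores = F.max_pool1d(scores, kernel_size, stride=1, padding=kernel_size // 2)

    # Keep the top-(max_capacity_prompt - window_size) prompt positions, in original order...
    topk = scores.topk(max_capacity_prompt - window_size, dim=-1).indices.sort(dim=-1).values
    idx = topk.unsqueeze(-1).expand(-1, -1, -1, head_dim)
    k_sel = key_states[..., : seq_len - window_size, :].gather(2, idx)
    v_sel = value_states[..., : seq_len - window_size, :].gather(2, idx)

    # ...plus the observation window itself.
    k_comp = torch.cat([k_sel, key_states[:, :, -window_size:, :]], dim=2)
    v_comp = torch.cat([v_sel, value_states[:, :, -window_size:, :]], dim=2)
    return k_comp, v_comp
```

My expectation was that the decode step would then attend over `k_comp`/`v_comp` plus newly generated keys, rather than the full prompt cache.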

Thank you!

jq-wei · Nov 20 '24 08:11

In particular, after prefilling there is one attention loop over `seq_len - self.max_capacity_prompt + 1` tokens. What is this loop for?

After that, decoding starts, but it seems to use the full KV cache.
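
For reference, here is a rough way one could check which cache length the decode step actually receives (assuming a Hugging Face-style causal LM with `use_cache=True`; the model name below is just a placeholder, and older transformers versions return legacy tuple caches instead of a `Cache` object):

```python
# Rough check: after the prefill forward pass, the per-layer key length in
# past_key_values should equal max_capacity_prompt if compression is applied,
# or the full prompt length otherwise.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"   # placeholder; any SnapKV-patched model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# A prompt longer than max_capacity_prompt, so compression should kick in.
prompt = "The quick brown fox jumps over the lazy dog. " * 200
inputs = tok(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model(**inputs, use_cache=True)

# past_key_values[layer][0] is the key tensor of shape [bsz, num_heads, cached_len, head_dim].
cached_len = out.past_key_values[0][0].shape[-2]
print("prompt length:", inputs.input_ids.shape[-1], "| cached KV length:", cached_len)
```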

jq-wei · Nov 20 '24 10:11