lightllm
Is there any comparison of the effects of token attention, for example compared with page attention?
If there's a paper or other proof, that would be even better.
I'm not asking only about throughput or single-request latency; what I'm referring to is inference quality, such as accuracy and similar metrics.
@skykiseki The paper is being written, and we have some kernels written in CUDA C that are faster than page attention in some cases.
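For readers unfamiliar with the distinction being asked about, here is a minimal, hypothetical sketch (not lightllm's actual code) contrasting token-granularity KV-cache allocation with fixed-size page allocation; the class names and free-list model are assumptions made purely for illustration, and the attention computation itself is unchanged in both cases.

```python
# Illustrative sketch only: token-level vs. page-level KV-cache allocation.
# Class names are hypothetical and do not come from the lightllm codebase.

class TokenAllocator:
    """Hands out exactly one KV-cache slot per token."""
    def __init__(self, total_slots: int):
        self.free = list(range(total_slots))

    def alloc(self, num_tokens: int) -> list[int]:
        assert len(self.free) >= num_tokens, "out of KV-cache slots"
        slots, self.free = self.free[:num_tokens], self.free[num_tokens:]
        return slots


class PagedAllocator:
    """Hands out fixed-size pages; the last page of a sequence may be partly unused."""
    def __init__(self, total_slots: int, page_size: int = 16):
        self.page_size = page_size
        self.free_pages = list(range(total_slots // page_size))

    def alloc(self, num_tokens: int) -> list[int]:
        pages_needed = -(-num_tokens // self.page_size)  # ceiling division
        assert len(self.free_pages) >= pages_needed, "out of KV-cache pages"
        pages = self.free_pages[:pages_needed]
        self.free_pages = self.free_pages[pages_needed:]
        return pages


if __name__ == "__main__":
    seq_len = 37  # tokens in one request
    tok = TokenAllocator(total_slots=1024)
    pag = PagedAllocator(total_slots=1024, page_size=16)

    token_slots = tok.alloc(seq_len)
    pages = pag.alloc(seq_len)

    print(f"token-level: {len(token_slots)} slots used, 0 wasted")
    print(f"page-level:  {len(pages) * pag.page_size} slots reserved, "
          f"{len(pages) * pag.page_size - seq_len} unused in the last page")
```

The sketch only illustrates memory-management granularity, which affects throughput rather than model outputs; accuracy comparisons of the kind asked about would need the measurements referenced above.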