
Medusa Speculative Decoding

Open · someone13574 opened this issue 2 years ago · 1 comment

Recently a project called Medusa was released. It trains additional lm_heads which, instead of predicting the next token, predict the tokens at positions n+2, n+3, and n+4. It then builds a tree of the top-k candidate combinations for those upcoming tokens and evaluates them all at once with some clever attention masking, accepting one of the best candidates. They report a ~2x speedup, and it looks like they are planning to integrate it into llama.cpp, so I thought it would be a good fit for this project as well.
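
To make the idea concrete, here is a rough, self-contained Python sketch (toy stand-ins for the base model and the Medusa heads, not Medusa's actual code; the function names, the greedy acceptance rule, and the per-path verification loop below are simplifications I made up for illustration, whereas the paper scores the whole candidate tree in a single tree-masked forward pass):

```python
# Toy sketch of Medusa-style speculation. base_logits() stands in for the ordinary
# lm_head; medusa_logits() stands in for the extra heads that guess tokens several
# positions ahead. Everything model-related here is a hypothetical stand-in.

import itertools
import random

VOCAB = list(range(32))   # toy vocabulary of 32 token ids
NUM_HEADS = 3             # extra Medusa heads, guessing offsets +2, +3, +4
TOP_K = 2                 # top-k guesses kept per head

def base_logits(context):
    """Stand-in for the base model's next-token distribution (deterministic toy)."""
    rng = random.Random(hash(tuple(context)))
    return [rng.random() for _ in VOCAB]

def medusa_logits(context, head):
    """Stand-in for Medusa head `head`, guessing the token `head + 2` steps ahead."""
    rng = random.Random(hash(tuple(context)) + 31 * (head + 1))
    return [rng.random() for _ in VOCAB]

def top_k(logits, k):
    return sorted(range(len(logits)), key=lambda t: -logits[t])[:k]

def build_candidate_tree(context):
    """Cartesian product of each head's top-k guesses = the candidate continuations.
    The real implementation evaluates this whole tree in one forward pass using a
    tree-shaped attention mask; here we simply enumerate the paths."""
    per_head = [top_k(medusa_logits(context, h), TOP_K) for h in range(NUM_HEADS)]
    return [list(path) for path in itertools.product(*per_head)]

def verify(context, candidate):
    """Accept the longest prefix of `candidate` that matches the base model's own
    greedy choice at each step, so output stays identical to plain greedy decoding
    (the paper also describes a relaxed 'typical acceptance' scheme)."""
    accepted, ctx = [], list(context)
    for tok in candidate:
        logits = base_logits(ctx)
        if tok != max(VOCAB, key=lambda t: logits[t]):
            break
        accepted.append(tok)
        ctx.append(tok)
    return accepted

def medusa_step(context):
    # 1. One base forward pass produces the next token as usual.
    logits = base_logits(context)
    context = context + [max(VOCAB, key=lambda t: logits[t])]
    # 2. The Medusa heads speculate several tokens further ahead.
    candidates = build_candidate_tree(context)
    # 3. Keep the candidate whose verified prefix is longest.
    best = max((verify(context, c) for c in candidates), key=len)
    return context + best

if __name__ == "__main__":
    print("tokens after one Medusa step:", medusa_step([1, 2, 3]))
```

With trained heads the speculated prefixes match often enough that each step emits several tokens for roughly one forward pass, which is where the reported speedup comes from; with the random toy model above, most candidates are rejected, which is expected.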

Links: Blog, Implementation, Models

someone13574 · Sep 11 '23 23:09

Ref to the llama.cpp issue: https://github.com/ggerganov/llama.cpp/issues/3137

someone13574 · Sep 12 '23 18:09