
Store KV cache of computed prompts to disk to avoid re-compute in follow-up runs

Opened by ggerganov 1 year ago · 9 comments

Idea from: https://github.com/ggerganov/llama.cpp/issues/23#issuecomment-1465308592

We can add a --cache_prompt flag that, when set, dumps the computed KV cache from prompt processing to a file on disk whose name is produced by hashing the prompt. On the next run, we first check whether a stored KV cache exists for that hash and, if so, load it straight from disk instead of recomputing it.

Great task for contributing to the project!

ggerganov · Mar 12 '23 21:03