llama.cpp
Store KV cache of computed prompts to disk to avoid re-compute in follow-up runs
Idea from: https://github.com/ggerganov/llama.cpp/issues/23#issuecomment-1465308592
We can add a `--cache_prompt` flag that, when set, dumps the computed KV cache of the prompt processing to a file on disk, named by a hash of the prompt. On subsequent runs, it first checks whether a stored KV cache exists for that hash and, if so, loads it straight from disk instead of re-computing it.
Great task for contributing to the project!