gemma.cpp
Add Self-Extend to gemma.cpp
Hi team, I checked LocalLLaMA and found that Gemma works well with the Self-Extend method. It would be awesome if this technique could be added to gemma.cpp. References:
This seems interesting and quite doable. I'll need to have a closer look at the paper and revisit tomorrow.
On the tactical side, we'll want to tidy up the APIs + dispatch mechanisms for multiple alternative inference graphs. The dispatch mechanisms are OK for the limited set of 7B/2B x IT/PT, but could use a refactor before we add more combinations of inference paths.
Glad to see that our method works well with Gemma!! Our Python implementation is here https://github.com/datamllab/LongLM/blob/master/gemma_self_extend_patch.py and the llama.cpp implementation is here https://github.com/ggerganov/llama.cpp/blob/cb49e0f8c906e5da49e9f6d64a57742a9a241c6a/examples/main/main.cpp#L569
We are glad to help!!!
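To make this concrete for whoever implements it: conceptually, Self-Extend keeps ordinary relative positions inside a neighbor window and maps anything farther away to grouped positions via floor division, shifted so the two regimes meet at the window boundary. The reference implementations realize this as two attention passes (a neighbor pass with ordinary RoPE and a grouped pass with floor-divided positions) merged by a mask, since RoPE rotates Q and K separately. The sketch below only shows the position mapping itself; the names and parameters (`neighbor_window`, `group_size`) are illustrative, following the linked Python patch, and are not gemma.cpp API.

```cpp
#include <stddef.h>

// Sketch of the Self-Extend position remapping, following the linked
// gemma_self_extend_patch.py. Not gemma.cpp API; names are illustrative.
// Tokens closer than `neighbor_window` keep their exact relative position;
// farther tokens fall back to grouped positions (floor division by
// `group_size`), shifted so the two regimes meet at the boundary.
struct SelfExtendParams {
  size_t neighbor_window;  // e.g. 512
  size_t group_size;       // e.g. 16
};

// Effective relative position for a (query, key) pair under Self-Extend.
// Assumes causal attention, i.e. q_pos >= k_pos. The reference
// implementations express this as two attention passes merged by a mask;
// this function only captures the mapping those passes implement.
size_t EffectiveRelPos(size_t q_pos, size_t k_pos, const SelfExtendParams& p) {
  const size_t rel = q_pos - k_pos;
  if (rel < p.neighbor_window) return rel;  // neighbor attention: unchanged
  // Grouped attention: the query's grouped position is shifted by
  // (w - w / G) so the grouped relative position lines up with
  // `neighbor_window` at the boundary.
  const size_t shift = p.neighbor_window - p.neighbor_window / p.group_size;
  return q_pos / p.group_size + shift - k_pos / p.group_size;
}
```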
Author here, glad to answer any questions about details for our work.
If someone wants to take a stab at this as a flag, happy to have a look at the PR / provide suggestions (add yourself as the assignee for this issue).
There's an enhancement that I think would improve the usefulness of this: %save / %load commands for KV cache state. Using the blob store headers, I don't think this would be that hard to implement. It might be a good first issue for someone who's comfortable with the codebase, and I think it would enable a lot of use cases that would otherwise be impractical.
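If it helps whoever picks this up, here's roughly the shape I'd expect %save / %load to take. This is a deliberately simplified sketch using plain file I/O and a placeholder struct, not gemma.cpp's actual KVCache layout or the blob store API mentioned above, which would replace both.

```cpp
#include <stddef.h>
#include <stdio.h>

#include <vector>

// Hypothetical, simplified view of a KV cache: one contiguous float buffer
// plus the number of tokens currently cached. gemma.cpp's real KVCache and
// the blob store headers would replace all of this.
struct KVCacheSnapshot {
  std::vector<float> data;
  size_t num_tokens = 0;
};

// Sketch of %save: write the token count, the element count, then raw data.
bool SaveKVCache(const KVCacheSnapshot& cache, const char* path) {
  FILE* f = fopen(path, "wb");
  if (!f) return false;
  const size_t n = cache.data.size();
  bool ok =
      fwrite(&cache.num_tokens, sizeof(cache.num_tokens), 1, f) == 1 &&
      fwrite(&n, sizeof(n), 1, f) == 1 &&
      fwrite(cache.data.data(), sizeof(float), n, f) == n;
  fclose(f);
  return ok;
}

// Sketch of %load: the inverse of SaveKVCache.
bool LoadKVCache(KVCacheSnapshot& cache, const char* path) {
  FILE* f = fopen(path, "rb");
  if (!f) return false;
  size_t n = 0;
  bool ok =
      fread(&cache.num_tokens, sizeof(cache.num_tokens), 1, f) == 1 &&
      fread(&n, sizeof(n), 1, f) == 1;
  if (ok) {
    cache.data.resize(n);
    ok = fread(cache.data.data(), sizeof(float), n, f) == n;
  }
  fclose(f);
  return ok;
}
```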
+1, we'd welcome a pull request for this, also happy to discuss.
@austinvhuang @jan-wassenberg I'd like to take a stab at this, if nobody has objections?
My background: I've been trying to break into this field, and I've had the pleasure of collaborating with the Google team in the past on the TFLite Support repository.
Nice, sounds great, we'd be happy to collaborate with you, discuss and review :)
FYI the KVCache internals will likely change a bit to use RowVectorBatch at some point, but no big deal.
Is there anything in the current code that you think will cause difficulties?
InferenceArgs is probably a good place to add the flag.
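For concreteness, a first cut might look something like this, assuming the same ForEach/visitor pattern the existing flags use; the flag names, defaults, and exact visitor signature here are illustrative, not the current codebase.

```cpp
#include <stddef.h>

// Hypothetical Self-Extend flags, sketched in the style of InferenceArgs'
// existing visitor-based flag registration; exact signatures may differ.
struct SelfExtendFlags {
  bool self_extend;       // enable Self-Extend grouped attention
  size_t se_group_size;   // floor-division factor for far positions
  size_t se_window_size;  // neighbor window kept at ordinary positions

  template <class Visitor>
  void ForEach(const Visitor& visitor) {
    visitor(self_extend, "self_extend", false,
            "Enable Self-Extend grouped attention for long contexts.");
    visitor(se_group_size, "se_group_size", size_t{16},
            "Self-Extend group size (floor division applied to far positions).");
    visitor(se_window_size, "se_window_size", size_t{512},
            "Self-Extend neighbor window kept at ordinary positions.");
  }
};
```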
Perfect, sorry for the delay. I can spin something up over the weekend. Please allow some time for me to read the codebase; I'll get back with a proposal.
Had a first pass through the paper. It demonstrates results only on RoPE position encodings, and the theory is supported only for relative position encodings; i.e., there's no proof it would work for models trained with sinusoidal positional encodings.
Shouldn't we have some kind of check for this?
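Something along these lines, perhaps, wherever the flag is validated. This is hypothetical: it assumes the model config exposes its positional-encoding type, and since all current Gemma checkpoints use RoPE it would always pass today, but it makes the paper's assumption explicit.

```cpp
#include <stdio.h>
#include <stdlib.h>

// Hypothetical guard; assumes the model config exposes its positional
// encoding type (enum and function names here are illustrative).
enum class PosEncoding { kRoPE, kSinusoidal, kLearned };

void ValidateSelfExtend(bool self_extend_enabled, PosEncoding enc) {
  if (self_extend_enabled && enc != PosEncoding::kRoPE) {
    // Self-Extend's analysis covers relative (RoPE-style) encodings only.
    fprintf(stderr, "--self_extend is only supported for RoPE-based models.\n");
    exit(1);
  }
}
```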
cc: @Mooler0410 @ahxt