llama : combined beam search + grammar sampling strategy
This feature was proposed by @spion in https://github.com/ggerganov/llama.cpp/issues/2813#issuecomment-1694390583
In some cases, it's useful to do constrained evaluation of logits based on a union of possible text values, then pick the sequence whose sum of logits (i.e. product of probabilities) gives the most probable outcome overall.
E.g. a template (using MS guidance):
{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}
To reliably make the best choice, we'd need to calculate the probability of all 3 token sequences. It's easy if every choice maps to a single token, but with multiple tokens we'd need not just parallel generation but parallel logit evaluation of multiple possible paths.
If we go greedy, we might get suboptimal results in cases where multiple choices start with the same token, or where the most likely first token does not belong to the most likely sequence overall.
It should be possible to implement this by combining the existing beam search and grammar sampling features. See the discussion in the referenced comment for more info.
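To make the selection rule concrete, here is a minimal, self-contained sketch. It does not use the llama.cpp API; the `candidate` struct, the `pick_best` helper, and the example numbers are made up for illustration. It simply scores each grammar-allowed continuation by the sum of its token log-probabilities and keeps the best one:

```cpp
// Score each grammar-allowed continuation by the sum of its token
// log-probabilities (equivalently, the product of probabilities) and
// keep the best one. Length normalization is omitted for brevity.
#include <cstdio>
#include <string>
#include <vector>

struct candidate {
    std::string        text;           // e.g. "leather", "chainmail", "plate"
    std::vector<float> token_logprobs; // log P(token_i | prompt, token_0..i-1)
};

static const candidate * pick_best(const std::vector<candidate> & cands) {
    const candidate * best = nullptr;
    float best_score = -1e30f;
    for (const auto & c : cands) {
        float score = 0.0f;
        for (float lp : c.token_logprobs) {
            score += lp; // sum of log-probs == log of the product of probs
        }
        if (score > best_score) {
            best_score = score;
            best       = &c;
        }
    }
    return best;
}

int main() {
    // made-up numbers: greedy would pick "plate" (best first token),
    // but "chainmail" is the most probable sequence overall
    const std::vector<candidate> cands = {
        { "leather",   { -1.2f, -1.5f        } },
        { "chainmail", { -1.1f, -0.2f, -0.1f } },
        { "plate",     { -0.9f, -2.5f        } },
    };
    printf("best choice: %s\n", pick_best(cands)->text.c_str());
    return 0;
}
```

With these hypothetical numbers, greedy would commit to "plate" because its first token is the most likely, while summing over whole sequences prefers "chainmail". A real implementation would get the per-token log-probs by evaluating each candidate path (e.g. via beams) and would also have to decide whether to length-normalize the scores.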
Hi team, I have some small experience with beam search and I think that I can help. Can I work on this, @ggerganov?
Sure, let us know if you make any progress and make sure to check the comments in the referenced issue
Sure @ggerganov. Besides that, is there anything I should be aware of? I'm using Apple Silicon for development.
Nothing specific comes to mind. I'd recommend writing a separate C++ example, similar to #2926, with extensive logging so we can debug what is being generated. If you open a draft PR, we can give recommendations during development, but you don't have to if you prefer it that way. Also, get familiar with the existing beam-search example and the grammar-related sampling options.
Hi @ggerganov, to be honest, it's quite hard to get started with configuring and debugging the project. Can we get in touch on some channel to discuss how to start? If such documentation doesn't exist, I'd like to write it down too, so it will benefit new contributors.
FYI, I know how to code in C++, but I don't have much experience building and shipping C++ projects, so maybe that's also an issue.
@ggerganov a bunch of these cool new toys (speculative exec, beam search) seem to be landing in either main or separate executables in examples.
Do you intend to push for some consolidation of this functionality all into main at some point?
Hi! What do you think I should do (implementation-wise) to speed up grammar-constrained generation? I am currently generating a specific JSON object with fixed keys using Mistral, and was wondering if I could somehow use the model to predict only the unknown values, based on some prompt+input, exactly like MS guidance. In my specific use case, this would approximately halve the number of tokens that need to be generated.
I was thinking about looking ahead one character at a time and, as long as there is exactly one option, accepting that character and continuing forward. That is to say, give control back to the model as soon as we need to branch (which tends to happen when filling the values of this specific JSON); see the sketch after this comment.
I didn't feel like opening another issue, as this one seemed closely related. Also, this has already been tangentially discussed in e.g. https://github.com/ggerganov/llama.cpp/pull/1773#issuecomment-1585805739 (Tobias Lütke)
I wonder if there is an optimization where, when the next token is already known from the grammar, we could skip the inference and just add it? In many types of grammars, like JSON or HTML, that could really speed up generation.
The thing with "the next token is already known" is that some tokens share prefixes, so many of them could be valid simultaneously under some grammar. I therefore think it would be better to iterate one character at a time until there's a branching, and only then tokenize and batch-decode.
Thanks in advance for any thoughts or suggestions!
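For reference, here is a rough, self-contained sketch of the character-level fast-forward idea from the comment above. It is not tied to the actual llama.cpp grammar API; `allowed_next_chars_fn`, `fast_forward_forced`, and the `toy_grammar` lambda are hypothetical names standing in for a real grammar query.

```cpp
// Fast-forward through characters that are forced by the grammar:
// while exactly one next character is legal, accept it without any
// model evaluation; stop as soon as the grammar branches (or ends),
// then tokenize the accepted text and feed it in one batched decode.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// hypothetical grammar query: which characters may legally follow `prefix`?
using allowed_next_chars_fn = std::function<std::vector<char>(const std::string & prefix)>;

std::string fast_forward_forced(const std::string & prefix, const allowed_next_chars_fn & allowed_next_chars) {
    std::string forced;
    for (;;) {
        const std::vector<char> next = allowed_next_chars(prefix + forced);
        if (next.size() != 1) {
            break; // the grammar branches (or terminates): hand control back to the model
        }
        forced += next[0]; // exactly one legal continuation: accept it for free
    }
    return forced; // the caller tokenizes this and decodes it in a single batch
}

int main() {
    // toy "grammar": the output must start with the literal below,
    // after which it branches (the model chooses among the values)
    const std::string forced_literal = "{\"armor\": \"";

    allowed_next_chars_fn toy_grammar = [&](const std::string & prefix) {
        if (prefix.size() < forced_literal.size()) {
            return std::vector<char>{ forced_literal[prefix.size()] };
        }
        return std::vector<char>{ 'l', 'c', 'p' }; // branch: leather / chainmail / plate
    };

    const std::string forced = fast_forward_forced("", toy_grammar);
    printf("forced prefix: %s\n", forced.c_str()); // -> {"armor": "
    return 0;
}
```

The point of this design is that all forced characters are accepted in one pass and then decoded in a single batch, instead of being sampled one model call at a time, which is where the token savings would come from.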
One simple workaround is to use the speculative example with a very tiny draft model (< 1B) and constrain it with grammar. This is not optimal, but if the cost of drafting with the tiny model is negligible, it will serve as a way to "queue" the deterministic tokens from the grammar.
Any news on this? Also, what @viantirreau suggested would be top notch.
Hi @nhhung1810, do you need any assistance with this? I’ve put together a quick prototype using Hugging Face Transformers over the weekend. While it’s been a while since I last worked extensively with C++, I do have a few years of experience and would be happy to help. Additionally, I have some ideas for faster token sampling that could follow once this feature is implemented. Let me know if I can contribute!
@tom-010 Hi, feel free to tackle it :D
I'll give it a spin next weekend :-)
FYI, they added an implementation in 2023 (https://github.com/ggml-org/llama.cpp/pull/2267) and then removed it in 2024 (https://github.com/ggml-org/llama.cpp/pull/7736). Not sure about the current status of that effort.