
llama : combined beam search + grammar sampling strategy

Open ggerganov opened this issue 2 years ago • 9 comments

This feature was proposed by @spion in https://github.com/ggerganov/llama.cpp/issues/2813#issuecomment-1694390583

In some cases, it's useful to do constrained evaluation of the logits based on a union of possible text values, and then pick the candidate whose sum of logits (i.e. product of probabilities) gives the most probable outcome overall.

E.g. a template (using MS guidance):

{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}

To reliably make the best choice, we'd need to calculate the probability of all 3 token sequences. It's easy if all the choices map to a single token, but with multiple tokens we'd need not just parallel generation but also parallel logit evaluation of multiple possible paths.

If we go greedy, we might get suboptimal results in cases where multiple choices start with the same token.

It should be possible to implement this by combining the existing beam search and grammar sampling features. See the discussion in the referenced comment for more info.
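
To make the scoring concrete, here is a minimal self-contained sketch of the selection step over pre-tokenized choices. The evaluator and token ids are hypothetical stand-ins, not the actual llama.cpp API; in practice the scores would come from evaluating the grammar-constrained candidates with the model.

```cpp
// Sketch only: score each allowed choice by the sum of its token log-probabilities
// (equivalently, the product of its probabilities) instead of committing greedily
// to the first token. All names below are illustrative stand-ins.
#include <cstdio>
#include <functional>
#include <vector>

// Stand-in for "evaluate prompt + this continuation and return the log-probability
// of each token in the continuation".
using eval_fn = std::function<std::vector<float>(const std::vector<int> & continuation)>;

// Pick the choice whose full token sequence has the highest total log-probability.
static size_t pick_best_choice(const std::vector<std::vector<int>> & choice_tokens, const eval_fn & eval) {
    size_t best      = 0;
    float  best_logp = -1e30f;
    for (size_t i = 0; i < choice_tokens.size(); ++i) {
        float logp = 0.0f;
        for (float lp : eval(choice_tokens[i])) {
            logp += lp; // sum of log-probs == log of the product of probabilities
        }
        if (logp > best_logp) {
            best_logp = logp;
            best      = i;
        }
    }
    return best;
}

int main() {
    // "leather", "chainmail", "plate" tokenized into made-up token ids.
    const std::vector<std::vector<int>> choices = { { 101 }, { 202, 203 }, { 301, 302 } };

    // Dummy evaluator: pretend every token has probability 0.5 (log 0.5 ~ -0.693).
    const eval_fn eval = [](const std::vector<int> & toks) {
        return std::vector<float>(toks.size(), -0.693f);
    };

    printf("best choice index: %zu\n", pick_best_choice(choices, eval));
    return 0;
}
```

Beam search would generalize this from a fixed set of choices to an arbitrary grammar by keeping the top-k partial sequences and extending them only with grammar-valid tokens.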

ggerganov avatar Aug 31 '23 06:08 ggerganov

Hi team, I have some small experience with beam search and I think that I can help. Can I work on this issue, @ggerganov?

nhhung1810 avatar Sep 03 '23 06:09 nhhung1810

Sure, let us know if you make any progress, and make sure to check the comments in the referenced issue.

ggerganov avatar Sep 03 '23 07:09 ggerganov

Sure @ggerganov. Besides that, is there anything I should be aware of? I'm using Apple Silicon for development.

nhhung1810 avatar Sep 03 '23 07:09 nhhung1810

Nothing specific comes to mind. I'd recommend writing a separate C++ example, similar to #2926, with extensive logging so we can debug what is being generated. If you open a draft PR, we can give recommendations during development, but you don't have to if you prefer to work on your own. Also, get familiar with the existing beam-search example and the grammar-related sampling options.

ggerganov avatar Sep 03 '23 07:09 ggerganov

Hi @ggerganov, to be honest, it's quite hard to get started configuring and debugging the project. Is there a channel where we can discuss how to get started? If such documentation doesn't exist, I'd like to write it down so it will benefit new contributors.

FYI, I know how to code in C++, but I don't have much experience building and shipping C++ projects, so maybe that's also part of the issue.

nhhung1810 avatar Sep 03 '23 13:09 nhhung1810

@ggerganov a bunch of these cool new toys (speculative execution, beam search) seem to be landing either in main or in separate executables in examples.

Do you intend to push for some consolidation of this functionality all into main at some point?

kiratp avatar Sep 08 '23 17:09 kiratp

Hi! What do you think I should do (implementation-wise) to speed up grammar-constrained generation? I am currently generating a specific JSON object with fixed keys using Mistral, and was wondering if I could somehow use the model to predict only the unknown values, based on some prompt + input, exactly like MS guidance. In my specific use case, this would approximately halve the number of tokens that need to be generated.

I was thinking about looking ahead one character at a time and, as long as there is exactly one option, accepting that character and continuing forward. That is to say, giving control back to the model as soon as we need to branch (which tends to happen when filling in the values of this specific JSON).

I didn't feel like opening another issue, as this one seemed closely related. Also, this has already been tangentially discussed before, e.g. in https://github.com/ggerganov/llama.cpp/pull/1773#issuecomment-1585805739 (Tobias Lütke):

"I wonder if there is an optimization where, if the next token is already known from the grammar, we could skip the inference and just add it? In many types of grammars like JSON or HTML that could really speed up generation."

The thing with "the next token is already known" is that some tokens share prefixes, so many of them could be valid simultaneously under a given grammar. Thus, I think it would be better to iterate one character at a time until there's a branching point, and only then tokenize and batch-decode.
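
Roughly, the loop I have in mind looks like the toy sketch below. The grammar type is a made-up stand-in (not llama.cpp's actual grammar implementation), with a fixed JSON skeleton standing in for the real grammar state:

```cpp
// Toy sketch of "fast-forwarding" text that the grammar forces: as long as exactly
// one next character is possible, accept it without running the model; only at a
// branch point do we tokenize the accepted span, batch-decode it, and let the model
// sample again. ToyGrammar is a hypothetical stand-in for a real grammar state machine.
#include <cstdio>
#include <optional>
#include <string>

struct ToyGrammar {
    // A JSON skeleton where '*' marks positions the model must fill in;
    // every other character is forced by the "grammar".
    std::string skeleton = "{\"armor\": \"*\", \"weapon\": \"*\"}";
    size_t      pos      = 0;

    // The single character forced next, or nullopt at a branch point / end of input.
    std::optional<char> forced_next() const {
        if (pos >= skeleton.size() || skeleton[pos] == '*') {
            return std::nullopt;
        }
        return skeleton[pos];
    }
    void accept(char /*c*/) { ++pos; } // advance past an accepted forced character
    void skip_placeholder() { ++pos; } // pretend the model generated the value here
};

// Collect every character forced from the current state. The caller would tokenize
// this string and feed it to the model in one batch (keeping the KV cache up to date),
// then resume normal sampling at the branch point.
static std::string fast_forward_forced(ToyGrammar & g) {
    std::string forced;
    while (auto c = g.forced_next()) {
        g.accept(*c);
        forced.push_back(*c);
    }
    return forced;
}

int main() {
    ToyGrammar g;
    while (g.pos < g.skeleton.size()) {
        const std::string forced = fast_forward_forced(g);
        if (!forced.empty()) {
            printf("forced (no inference needed): %s\n", forced.c_str());
        }
        if (g.pos < g.skeleton.size()) {
            printf("model generates a value here\n");
            g.skip_placeholder();
        }
    }
    return 0;
}
```

For a real grammar, forced_next() would have to check whether all currently viable parses agree on the same next character.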

Thanks in advance for any thoughts or suggestions!

viantirreau avatar Oct 25 '23 00:10 viantirreau

One simple workaround is to use the speculative example with a very tiny draft model (< 1B) and constrain it with the grammar. This is not optimal, but if the cost of drafting with the tiny model is negligible, it will serve as a way to "queue" the deterministic tokens from the grammar.
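
For illustration only, a hedged sketch of such an invocation (built from examples/speculative; the exact flags, model files, and grammar path here are assumptions and may differ between llama.cpp versions):

```sh
# Illustrative, not verified against a specific version: pair a large main model (-m)
# with a tiny draft model (-md) and constrain generation with a GBNF grammar file.
./speculative \
    -m  models/7B/ggml-model-q4_0.gguf \
    -md models/tiny-draft-q4_0.gguf \
    --grammar-file grammars/json.gbnf \
    -p "Generate the character sheet as JSON:" \
    -n 256
```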

ggerganov avatar Oct 25 '23 07:10 ggerganov

Any news on this? Also, what @viantirreau suggested would be top-notch.

ExtReMLapin avatar Jul 21 '24 11:07 ExtReMLapin

Hi @nhhung1810, do you need any assistance with this? I’ve put together a quick prototype using Hugging Face Transformers over the weekend. While it’s been a while since I last worked extensively with C++, I do have a few years of experience and would be happy to help. Additionally, I have some ideas for faster token sampling that could follow once this feature is implemented. Let me know if I can contribute!

tom-010 avatar Nov 18 '24 06:11 tom-010

@tom-010 Hi, feel free to tackle it :D

nhhung1810 avatar Nov 18 '24 07:11 nhhung1810

I'll give it a spin next weekend :-)

tom-010 avatar Nov 18 '24 10:11 tom-010

FYI, an implementation was added in 2023 (https://github.com/ggml-org/llama.cpp/pull/2267) and then removed in 2024 (https://github.com/ggml-org/llama.cpp/pull/7736). Not sure about the current status of that effort.

WilliamTambellini avatar May 19 '25 23:05 WilliamTambellini