[generate] return past_key_values
What does this PR do?
Allows returning `past_key_values` from `generate` when `use_cache=True`. Like other returned values, `past_key_values` are also returned as a `Tuple`, one element per generated token.
Fixes #17016
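A sketch of the intended usage per this description (hypothetical until the PR is merged; `return_dict_in_generate=True` is assumed here, as for the other returned values):

```python
output = model.generate(
    input_ids,
    max_new_tokens=5,
    use_cache=True,
    return_dict_in_generate=True,
)
# Per this PR, past_key_values is a tuple with one element
# per generated token, like the other returned values (e.g. scores).
print(len(output.past_key_values))
```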
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
We'll just need to fix the failing tests now :-) I think you'll have to overwrite this "checking" function in the respective individual test files.
Hey there, sorry to nag, but any chance of moving this along? Anything I can do to help?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
(@patrickvonplaten @patil-suraj should I take over this PR? :) )
If that's ok with you @gante, this would be amazing!
Hi, thank you all for working on this feature! Is this going to be merged into the main branch soon?
@shunzh I haven't started working on it and it's hard to give estimates -- hopefully less than a month :)
Was this closed because it's now possible to retrieve `past_key_values`, or was there another reason?
@gilljon it is not closed :)
@gante I'm sorry for the confusion! Any idea when it will be merged?
hi @gante. Any idea when this will be merged? Interested in using it and building something on top of it. I'd be happy to put on the finishing touches if needed too!
Hey! Just a friendly reminder. Any chance to get it merged soon?
I would absolutely love this feature! This would open up so much for me, because I have prompts like:
```python
prompt = '''
Stuff
* <generate X>
* <generate Y>
Stuff
You said [X], and [Y] previously, now:
* <generate Z>
'''
```
This is so expensive without `past_key_values`.
So this PR now has merge conflicts, and I tried applying the patch, but upon inspection it's quite severely out of date now.
Is there another way to accomplish this?
I notice that `model.forward` typically allows returning `past_key_values`. But then I... have to implement a sampling algorithm myself? Would this be the best way without needing upstream changes, and if so, how can I chain together `model.forward` and a sampler?
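Something like the following is what I have in mind, as a minimal, untested sketch (gpt2 and the greedy step are placeholders for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Stuff", return_tensors="pt").input_ids
past_key_values = None
generated = input_ids

with torch.no_grad():
    for _ in range(20):
        # After the first step, only the newest token is fed in;
        # past_key_values covers everything before it.
        step_input = generated if past_key_values is None else generated[:, -1:]
        outputs = model(step_input, past_key_values=past_key_values, use_cache=True)
        past_key_values = outputs.past_key_values
        # Greedy "sampler" for simplicity -- swap in top-k / top-p here.
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)

print(tokenizer.decode(generated[0]))
```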
EDIT: IIUC, `generation_utils` is where `model.generate` comes from, so the new place to make these edits is: https://github.com/huggingface/transformers/blob/0b192de1f353b0e04dad4813e02e2c672de077be/src/transformers/generation/utils.py#L1301
Is this ticket dead because some other technique already exists for returning and reusing `past_key_values`? This is a killer feature.
The following PR is more up to date: https://github.com/huggingface/transformers/pull/25086
(deprecated in favor of #25086)
Hey folks 👋
#25086 was merged.
If you install from `main` and add `return_dict_in_generate=True` to `generate`, `past_key_values` will be part of the output, assuming your model is configured with `use_cache=True` (the default).
You can then pass `past_key_values` to `generate` to continue generating!
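A minimal sketch of that workflow (untested; gpt2 and the prompt are just placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")

# First call: with use_cache=True (the default) and
# return_dict_in_generate=True, the cache is part of the output.
out = model.generate(**inputs, max_new_tokens=10, return_dict_in_generate=True)
cache = out.past_key_values

# Second call: feed the full sequence back in together with the cache
# to continue generating without recomputing it.
continued = model.generate(
    out.sequences,
    max_new_tokens=10,
    return_dict_in_generate=True,
    past_key_values=cache,
)
print(tokenizer.decode(continued.sequences[0]))
```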
I can't get it to work with Intel neural_chat. What version was this on?