
wip: comment out prepend full_text

Open jrc2139 opened this pull request 9 months ago • 2 comments

What does this PR do?

This PR investigates why passing "return_full_text": true as a parameter to the /generate endpoint produces a valid translation when running madlad400.
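
For reference, a minimal request exercising this path might look like the sketch below (it assumes a TGI server on localhost; the input text and madlad-style "<2de>" target-language tag are illustrative, while the payload shape follows the TGI /generate API):

```python
# Sketch of a /generate request with return_full_text enabled.
# The URL, input text, and "<2de>" language tag are illustrative.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "<2de> How are you today?",
        "parameters": {"return_full_text": True},
    },
    timeout=60,
)
print(resp.json()["generated_text"])
```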

Background: https://github.com/huggingface/text-generation-inference/issues/1416#issuecomment-2705165023

Fixes #1416

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [ ] Did you read the contributor guideline, Pull Request section?
  • [x] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [ ] Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

jrc2139 · Mar 07 '25 00:03

return_full_text is a legacy option tied to the initial transformers.pipelines implementation (something like 4+ years ago). We had API dependencies on that behavior and therefore implemented it here; it can mostly be disregarded nowadays.

What you did here effectively deactivates its job, which is really just to decode all of the decoder tokens instead of only the new ones.
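
In other words, something like this sketch (illustrative only, not TGI's actual implementation; `tokenizer` stands in for whatever decoder the server uses):

```python
# Illustrative only: return_full_text merely selects which token span is decoded.
def build_output(tokenizer, prompt_ids: list[int], new_ids: list[int],
                 return_full_text: bool) -> str:
    if return_full_text:
        # Decode the prompt tokens together with the newly generated ones.
        return tokenizer.decode(prompt_ids + new_ids)
    # Default: decode only the newly generated tokens.
    return tokenizer.decode(new_ids)
```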

I was under the impression that T5 was an encoder/decoder model, so I'm surprised to see a decoder-only model here.

I'm pretty sure the "bug" will simply turn out to be a tokenizer issue, where you're possibly hitting a very old non-flash T5 version that doesn't support some of the flags used in those tokenizers.
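
One way to probe that hypothesis outside of TGI would be to compare the slow (SentencePiece) and fast (Rust) tokenizers for the checkpoint; this is only a sketch, and the model id and sample text are assumptions:

```python
# Compare slow and fast tokenizer round-trips for the madlad400 checkpoint;
# a divergence here would point at a tokenizer-level issue rather than the
# return_full_text code path. The model id is an assumption.
from transformers import AutoTokenizer

model_id = "google/madlad400-3b-mt"
slow = AutoTokenizer.from_pretrained(model_id, use_fast=False)
fast = AutoTokenizer.from_pretrained(model_id, use_fast=True)

text = "<2de> How are you today?"
print("slow:", slow.decode(slow(text).input_ids))
print("fast:", fast.decode(fast(text).input_ids))
```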

Thanks for the fix for your use case; I'll leave it up for others to see, but if we're going to fix it properly, we need to find the root cause and fix that instead.

Narsil · Mar 10 '25 11:03

@Narsil thanks for looking into this and explaining it to me; that gives me a sense of what the root of the problem could be. It was nice to see madlad being served with good results, and I may just use this for the time being. Looking forward to tracking this solution and learning more about this great project 💪

jrc2139 · Mar 10 '25 11:03

Any movement on this one? Would be excellent to fully support MadLad as a translation T5 model.

josh-at-straker · Jun 18 '25 02:06