wip: comment out prepend full_text
What does this PR do?
This is an investigation into why using `"return_full_text": true` as a parameter when hitting the `/generate` endpoint produces a valid translation while running madlad400.
Background: https://github.com/huggingface/text-generation-inference/issues/1416#issuecomment-2705165023
Fixes #1416
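For context, the behavior can be reproduced with a request along these lines (the endpoint URL, prompt, and parameter values are illustrative, not taken from the issue; madlad400 expects a `<2xx>` target-language prefix in the input):

```python
import requests

# Hypothetical local TGI instance serving madlad400; URL and prompt are illustrative.
response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "<2en> ¿Cómo estás?",  # madlad400 uses a <2xx> target-language prefix
        "parameters": {
            "return_full_text": True,  # with this set, the translation comes back valid
            "max_new_tokens": 64,
        },
    },
)
print(response.json()["generated_text"])
```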
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the contributor guideline, Pull Request section?
- [x] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
`return_full_text` is a legacy option tied to the initial `transformers.pipelines` implementation (something like 4+ years ago).
We had API dependencies on that behavior and therefore implemented it here; it can mostly be disregarded nowadays.
What you did here effectively deactivates its job, which is really just to decode all the decoder tokens instead of only the new ones.
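A minimal sketch of that distinction (hypothetical helper, not TGI's actual code): with `return_full_text` the whole decoder sequence is decoded, prompt tokens included; otherwise only the newly generated tokens are.

```python
# Sketch only: illustrates the semantics described above, not TGI internals.
def decode_generation(tokenizer, prompt_ids, output_ids, return_full_text: bool) -> str:
    if return_full_text:
        # Decode every decoder token, prompt included.
        return tokenizer.decode(output_ids, skip_special_tokens=True)
    # Decode only the tokens produced after the prompt.
    return tokenizer.decode(output_ids[len(prompt_ids):], skip_special_tokens=True)
```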
I was under the impression that T5 was an encoder/decoder model, so I'm surprised to see a decoder-only model here.
I'm pretty sure the "bug" will simply end up being a tokenizer issue, where you're possibly hitting a very old non-flash T5 version that doesn't support some flags used in those tokenizers.
Thanks for the fix for your use case; I'll leave it up for others to see, but if we're going to fix it, we need to find the root cause and fix that instead.
@Narsil thanks for looking into this and explaining it to me, giving me a sense of what the root of the problem could be. It was nice to see madlad being served with good results, and I may just use this for the time being. Looking forward to tracking this solution and learning more about this great project 💪
Any movement on this one? Would be excellent to fully support MadLad as a translation T5 model.