T5Tokenizer Fast and Slow give different results with AddedTokens

Open patrickvonplaten opened this issue 3 years ago • 4 comments

When adding a new token to T5TokenizerFast and/or T5Tokenizer, the two tokenizers give different results, which is unexpected.

E.g. running the following code:

from transformers import AutoTokenizer, AddedToken

tok = AutoTokenizer.from_pretrained("t5-small", use_fast=False)
tok_fast = AutoTokenizer.from_pretrained("t5-small", use_fast=True)

# The slow tokenizer gets a plain string, the fast one an AddedToken
tok.add_tokens("$$$")
tok_fast.add_tokens(AddedToken("$$$", lstrip=False))

prompt = "Hello what is going on $$$ no ? We should"

print("Slow")
print(tok.decode(tok(prompt).input_ids))

print("Fast")
print(tok_fast.decode(tok_fast(prompt).input_ids))

yields a different result for each tokenizer:

Slow
Hello what is going on $$$ no? We should</s>
Fast
Hello what is going on$$$ no? We should</s>
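
For reference, a minimal sketch (same checkpoint as above; the exact token strings are an assumption based on T5's sentencepiece vocabulary) suggesting that the difference happens at decode time rather than at encode time:

from transformers import AutoTokenizer, AddedToken

tok = AutoTokenizer.from_pretrained("t5-small", use_fast=False)
tok_fast = AutoTokenizer.from_pretrained("t5-small", use_fast=True)

tok.add_tokens("$$$")
tok_fast.add_tokens(AddedToken("$$$", lstrip=False))

prompt = "Hello what is going on $$$ no ? We should"

# Both tokenizers should emit '$$$' without the Metaspace prefix '▁'
# that regular T5 tokens (e.g. '▁Hello') carry, so the fast decoder
# has no marker telling it to restore the space.
print(tok.convert_ids_to_tokens(tok(prompt).input_ids))
print(tok_fast.convert_ids_to_tokens(tok_fast(prompt).input_ids))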

Environment info

  • transformers version: 4.18.0.dev0
  • Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
  • Python version: 3.9.7
  • Huggingface_hub version: 0.4.0.dev0
  • PyTorch version (GPU?): 1.10.2+cu102 (True)
  • Tensorflow version (GPU?): 2.8.0 (False)
  • Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
  • Jax version: 0.3.1
  • JaxLib version: 0.3.0

patrickvonplaten · Mar 22 '22

cc @Narsil @SaulLu

patrickvonplaten · Mar 22 '22

Hi, the behavior can be explained by the fact that the encoder splits on whitespace and discards it; the decoder then uses Metaspace (which mirrors the sentencepiece behavior), which does not prefix tokens with a space, not even added tokens. The spaces are expected to already be contained within the tokens themselves.
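
As a quick check (a minimal sketch; backend_tokenizer exposes the underlying tokenizers object on fast tokenizers):

from transformers import AutoTokenizer

tok_fast = AutoTokenizer.from_pretrained("t5-small", use_fast=True)
# Should print a Metaspace decoder: it only restores spaces for tokens
# carrying the '▁' marker, which added tokens never have.
print(tok_fast.backend_tokenizer.decoder)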

We could at least have parity on this, for sure!

But I am not sure who is right in this case; both decoded values look OK to me. The proposed AddedToken contains no information about the spaces, so it's OK not to place one back by default (doing so would break added tokens that are specifically intended for content without spaces). In this particular instance, because we're coming from a sentence with a space, it of course makes more sense to put one back to recover the original string. But for decode([999, 998]) with 999 = "$(" and 998 = ")$", it's unclear to me whether a user wants "$( )$" or "$()$" when decoding. (Just trying to take a plausible example where the answer is unclear.)
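
A hedged sketch of that ambiguity (the ids 999 and 998 above are made up; here the real ids come from add_tokens):

from transformers import AutoTokenizer

tok_fast = AutoTokenizer.from_pretrained("t5-small", use_fast=True)
tok_fast.add_tokens(["$(", ")$"])

ids = tok_fast.convert_tokens_to_ids(["$(", ")$"])
# Neither AddedToken records surrounding whitespace, so this likely
# prints "$()$"; whether the user wanted "$( )$" cannot be recovered.
print(tok_fast.decode(ids))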

Narsil · Mar 23 '22

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] · Apr 21 '22

should this be reopened if it's not resolved yet?

wise-east · Jul 29 '22

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] · Sep 01 '22