
PreTrainedTokenizerFast with tokenizer object is acting on original tokenizer object


System Info

  • transformers version: 4.21.0
  • Platform: Windows-10-10.0.19041-SP0
  • Python version: 3.9.2
  • Huggingface_hub version: 0.8.1
  • PyTorch version (GPU?): 1.8.1+cpu (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: no
  • Using distributed or parallel set-up in script?: no

Who can help?

@LysandreJik

Information

  • [ ] The official example scripts
  • [X] My own modified scripts

Tasks

  • [ ] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • [X] My own task or dataset (give details below)

Reproduction

  • To reproduce this error, create a tokenizer and wrap it in PreTrainedTokenizerFast:
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

data = [
    "My first sentence",
    "My second sentence",
    "My third sentence is a bit longer",
    "My fourth sentence is longer than the third one",
]

# Train a small word-level tokenizer on the toy corpus
tokenizer = Tokenizer(models.WordLevel(unk_token="<unk>"))
trainer = trainers.WordLevelTrainer(vocab_size=10, special_tokens=["<unk>", "<pad>"])
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
tokenizer.train_from_iterator(data, trainer=trainer)

# Configure padding and truncation directly on the tokenizers.Tokenizer object
tokenizer.enable_padding(pad_token="<pad>", pad_id=tokenizer.token_to_id("<pad>"))
tokenizer.enable_truncation(max_length=5)
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)

This gives an output of length 5 and an explicit padding object.

  • On the other hand, if we load our tokenizer into the PreTrainedTokenizerFast class and print the same thing as before:
from transformers import PreTrainedTokenizerFast

# Wrap the trained tokenizer and run it on the same data
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
fast_tokenizer(data)

# Inspect the original tokenizer again
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)

This gives an output with length > 5 and None as the padding.
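
For reference, the change can be made explicit by inspecting the backend state directly. This is a sketch, not part of the original report, and assumes the tokenizers library exposes the truncation configuration via Tokenizer.truncation alongside Tokenizer.padding:

# After wrapping and calling the fast tokenizer, the state that was configured
# with enable_padding/enable_truncation on the original object is no longer set
print(tokenizer.padding)      # None after wrapping/calling, per the report
print(tokenizer.truncation)   # appears to be cleared as well, which would explain
                              # encodings longer than 5 tokens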

Expected behavior

The behavior should be the same as before loading the tokenizer into the PreTrainedTokenizerFast wrapper; wrapping should not affect the padding and truncation settings.
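
A possible workaround sketch (not part of the original report): instead of relying on the state configured on the backend tokenizers.Tokenizer, request padding and truncation through the standard arguments of the fast tokenizer's call, passing pad_token so padding can be applied:

from transformers import PreTrainedTokenizerFast

fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token="<unk>",
    pad_token="<pad>",
)
# Ask for padding and truncation explicitly at call time
encoded = fast_tokenizer(data, padding="max_length", truncation=True, max_length=5)
print(encoded["input_ids"])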

YBooks, Aug 01 '22

cc @SaulLu

LysandreJik, Aug 09 '22

Hi @YBooks

Thank you very much for the detailed issue :hugs: !

I see that you have already proposed a fix that has been merged and that solves the problem you are pointing out. If you are happy with it, is it ok if we close this issue?

SaulLu, Aug 09 '22

Hey @SaulLu, yes, sure. My pleasure!

YBooks, Aug 10 '22

@YBooks, @SaulLu, @sgugger can we reopen this issue, since https://github.com/huggingface/transformers/pull/18408 creates another one?

maclandrol, Sep 19 '22