PreTrainedTokenizerFast initialized with a tokenizer object acts on the original tokenizer object
System Info
- `transformers` version: 4.21.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Who can help?
@LysandreJik
Information
- [ ] The official example scripts
- [X] My own modified scripts
Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
Reproduction
- To reproduce this error, we can create a tokenizer and try to wrap it in `PreTrainedTokenizerFast`:
```python
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers

data = [
    "My first sentence",
    "My second sentence",
    "My third sentence is a bit longer",
    "My fourth sentence is longer than the third one",
]

tokenizer = Tokenizer(models.WordLevel(unk_token="<unk>"))
trainer = trainers.WordLevelTrainer(vocab_size=10, special_tokens=["<unk>", "<pad>"])
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
tokenizer.train_from_iterator(data, trainer=trainer)
tokenizer.enable_padding(pad_token="<pad>", pad_id=tokenizer.token_to_id("<pad>"))
tokenizer.enable_truncation(max_length=5)

print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```
This prints 5 token ids and an explicit padding configuration.
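For completeness, the truncation configuration can be inspected the same way (a small sketch; it assumes the `truncation` attribute is exposed on `tokenizers.Tokenizer` just like `padding`):

```python
# Both settings are present on the standalone tokenizer at this point.
print(tokenizer.padding)     # padding configuration (pad_id, pad_token, ...)
print(tokenizer.truncation)  # truncation configuration (max_length=5, ...)
```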
- On the other hand, if we load our tokenizer into the `PreTrainedTokenizerFast` class, call it on the data, and print the same thing as before:
```python
from transformers import PreTrainedTokenizerFast

fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
fast_tokenizer(data)

print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```
This prints more than 5 token ids and `None` for the padding configuration.
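A minimal check of what seems to be the cause (a hedged sketch, relying on the `backend_tokenizer` property of `PreTrainedTokenizerFast`): the wrapper appears to keep a reference to the very same Rust tokenizer rather than a copy, so the padding/truncation changes made during `fast_tokenizer(data)` are visible on the original object.

```python
# If this prints True, the wrapper and the original share one Tokenizer
# instance, so mutating one mutates the other.
print(fast_tokenizer.backend_tokenizer is tokenizer)
```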
Expected behavior
The tokenizer should behave the same as it did before being loaded into the `PreTrainedTokenizerFast` wrapper: wrapping it should not affect its padding and truncation settings.
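Until this is fixed, a possible workaround sketch (assuming `tokenizers.Tokenizer` can be deep-copied via its pickle support) is to hand the wrapper a copy instead of the original:

```python
import copy

from transformers import PreTrainedTokenizerFast

# Wrap a deep copy so the fast tokenizer cannot mutate the original
# tokenizer's padding/truncation settings.
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=copy.deepcopy(tokenizer))
fast_tokenizer(data)

# The original tokenizer keeps its truncation (max_length=5) and padding config.
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```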
cc @SaulLu
Hi @YBooks
Thank you very much for the detailed issue :hugs:!
I see that you have already proposed a fix that has been merged and that solves the problem you are pointing out. If you are happy with it, is it ok if we close this issue?
Hey @SaulLu, yes sure. My pleasure.
@YBooks, @SaulLu, @sgugger, can we reopen this issue, since https://github.com/huggingface/transformers/pull/18408 creates another one?