encoder-agnostic-adaptation
Selection of the special tokens
Hi, according to the preprocess.py file, you choose the special tokens as follows:
tgt_bos = '<|endoftext|>'
tgt_eos = '\u0120GDDR'
tgt_pad = '\u0120SHALL'
tgt_unk = '\u0120RELE'
src_pad = '\u0120SHALL'
src_unk = '\u0120RELE'
In the Hugging Face tokenizer implementation, '<|endoftext|>' is used for all of these special tokens. Is there a reason to use other tokens from the vocabulary as special tokens? And what happens if these tokens appear in the dataset after BPE encoding?
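To make the collision concern concrete, here is a minimal sketch (not part of the repo; find_collisions is a hypothetical helper) that scans BPE-encoded lines for the chosen special tokens. Since '\u0120SHALL' is just the GPT-2 BPE token for the word " SHALL", any dataset containing that word would produce a token identical to the padding symbol:

```python
# Tokens the repo repurposes as specials; '\u0120' is the GPT-2 BPE
# marker for a leading space ('Ġ'), so '\u0120SHALL' is just " SHALL".
SPECIAL_TOKENS = {"\u0120GDDR", "\u0120SHALL", "\u0120RELE", "<|endoftext|>"}

def find_collisions(bpe_lines, specials=SPECIAL_TOKENS):
    """Return (line_index, token) pairs where a chosen special token
    also appears as an ordinary token in the BPE-encoded data."""
    hits = []
    for i, line in enumerate(bpe_lines):
        for tok in line.split():
            if tok in specials:
                hits.append((i, tok))
    return hits

# Toy example: " SHALL" in legal text encodes to the single token 'ĠSHALL',
# which is indistinguishable from the padding token chosen above.
data = ["\u0120the \u0120contract \u0120SHALL \u0120apply"]
print(find_collisions(data))  # -> [(0, 'ĠSHALL')]
```

If the specials were picked precisely because they are rare in the training corpora, a check like this would at least verify that assumption; '<|endoftext|>' avoids the problem entirely because it can never be produced by encoding ordinary text.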
Thanks