
Selection of the special tokens

ShuyangCao opened this issue 5 years ago • 0 comments

Hi, according to preprocess.py, the special tokens are chosen as follows:

```python
tgt_bos = '<|endoftext|>'
tgt_eos = '\u0120GDDR'
tgt_pad = '\u0120SHALL'
tgt_unk = '\u0120RELE'
src_pad = '\u0120SHALL'
src_unk = '\u0120RELE'
```

In the Hugging Face tokenizer implementation, '<|endoftext|>' is used for all of these special tokens. Is there a reason to repurpose other tokens from the vocab as special tokens? What happens if these tokens appear in the dataset after BPE encoding?
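To make the concern concrete, here is a minimal sketch (the example sentence and the `find_collisions` helper are hypothetical, not from the repo) that checks whether any of the repurposed special tokens also show up in BPE-encoded data. Note that '\u0120' is the GPT-2 byte-level marker for a leading space, so '\u0120GDDR' is simply the BPE token for " GDDR":

```python
# Reserved special tokens from preprocess.py ('\u0120' = leading-space marker in GPT-2 BPE)
RESERVED = {'\u0120GDDR', '\u0120SHALL', '\u0120RELE'}

def find_collisions(bpe_tokens):
    """Return any reserved special tokens that also occur in the encoded data."""
    return RESERVED.intersection(bpe_tokens)

# Hypothetical example: a sentence mentioning GDDR memory would, after
# GPT-2 BPE encoding, contain the token '\u0120GDDR' and thus collide
# with the token chosen for tgt_eos.
encoded = ['The', '\u0120new', '\u0120GDDR', '6', '\u0120memory']
print(find_collisions(encoded))  # {'ĠGDDR'}
```

If such a collision occurs, the model could not distinguish a genuine occurrence of " GDDR" in the text from an end-of-sequence marker, which is the crux of the question.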

Thanks

ShuyangCao — Oct 08 '19