
Allow usage of `tokenizedDocument` in BERT tokenization

Open · bwdGitHub opened this issue on Dec 10, 2021 · 0 comments

We would like to use these issues to gauge user interest.

The BERT tokenizer is intended as an identical reimplementation of the original BERT tokenization. However, it is possible to replace `bert.tokenizer.internal.BasicTokenizer` with a tokenizer built on `tokenizedDocument`.

The expectation is that this should not affect the model outputs much, since the WordPiece encoding is unchanged, and it is these WordPiece-encoded sub-tokens that form the input to the model. A rough sketch of the idea is shown below.
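As an illustration only (not the actual internal API), the basic tokenization step could be handled by `tokenizedDocument` and the resulting tokens passed through the existing WordPiece step. The `wordPieceTokenizer` object and its `tokenize` method below are assumptions standing in for `bert.tokenizer.internal.WordPieceTokenizer` constructed with the BERT vocabulary; the real class interface may differ.

```matlab
% Sketch: swap the basic tokenization step for tokenizedDocument,
% keeping the WordPiece encoding unchanged.
% Assumption: wordPieceTokenizer stands in for the repo's internal
% WordPiece tokenizer, already loaded with the BERT vocabulary.

text = "Tokenization with tokenizedDocument should be faster.";

% Basic tokenization via Text Analytics Toolbox. Lowercasing mimics the
% behaviour of the uncased BERT basic tokenizer.
doc = tokenizedDocument(lower(text));
details = tokenDetails(doc);
basicTokens = details.Token;        % string array of basic tokens

% WordPiece-encode each basic token (assumed tokenize method).
subTokens = string.empty(1,0);
for k = 1:numel(basicTokens)
    subTokens = [subTokens, reshape(wordPieceTokenizer.tokenize(basicTokens(k)), 1, [])]; %#ok<AGROW>
end
disp(subTokens)
```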

The advantages are that `tokenizedDocument` is considerably faster than `BasicTokenizer` and may offer better integration with Text Analytics Toolbox functionality.
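A quick way to check the speed claim locally might look like the following. This assumes `bert.tokenizer.internal.BasicTokenizer` has a default constructor and a `tokenize` method; it is an internal class, so the calls may need adjusting.

```matlab
% Rough timing comparison sketch (internal API assumed; adjust as needed).
text = repmat("The quick brown fox jumps over the lazy dog. ", 1, 1000);
text = join(text);

basic = bert.tokenizer.internal.BasicTokenizer();    % assumed constructor
tBasic = timeit(@() basic.tokenize(text));           % assumed method name
tTextAnalytics = timeit(@() tokenizedDocument(text));

fprintf("BasicTokenizer: %.3f s, tokenizedDocument: %.3f s\n", ...
    tBasic, tTextAnalytics);
```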

bwdGitHub · Dec 10, 2021