Tanvi Garg
Instead of using old_tokenizer.tokens_from_list, you can substitute for nlp.tokenizer any custom tokenizer that performs the correct input -> Doc conversion with the correct vocab: from spacy.tokens import Doc class _PretokenizedTokenizer: """Custom...
A list of strings is used to hold the pre-tokenized input text, which the custom tokenizer then wraps into a Doc.
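The snippet above is cut off, so here is a minimal sketch of the idea, assuming spaCy v2+ and that the input is already a list of token strings (the class name and the `words` handling follow the pattern shown, not the original author's exact code):

```python
from typing import List

import spacy
from spacy.tokens import Doc


class _PretokenizedTokenizer:
    """Custom tokenizer: wraps an already-tokenized list of strings into a Doc,
    using the pipeline's vocab so downstream components see consistent lexemes."""

    def __init__(self, vocab):
        self.vocab = vocab

    def __call__(self, words: List[str]) -> Doc:
        # Build the Doc directly from the given tokens; no further splitting occurs.
        return Doc(self.vocab, words=words)


nlp = spacy.blank("en")
# Replace the default tokenizer with the pretokenized one.
nlp.tokenizer = _PretokenizedTokenizer(nlp.vocab)

# Calling the tokenizer directly converts the token list into a Doc.
doc = nlp.tokenizer(["Hello", "world", "!"])
print([t.text for t in doc])  # ['Hello', 'world', '!']
```

Note that in spaCy v3, `nlp(...)` itself expects a string, so with pretokenized input it is safest to call the tokenizer directly (as above) and then run any pipeline components on the resulting Doc.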