FastTokenizer for LLaMa
Feature request
FastTokenizer support for the LLaMa sentencepiece tokenizer.
Motivation
The offset_mapping is only available with a FastTokenizer, so it would be useful to have support for this.
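For context, here is a minimal sketch of what offset_mapping provides on a model that already ships a fast tokenizer (bert-base-uncased is just an illustrative choice); the slow, sentencepiece-based LLaMa tokenizer cannot do this:

```python
from transformers import AutoTokenizer

# Any model with a fast tokenizer works here; bert-base-uncased is an example.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)

enc = tokenizer("Hello world", return_offsets_mapping=True)
# Character spans for each token, e.g. [(0, 0), (0, 5), (6, 11), (0, 0)]
# (the (0, 0) entries are the [CLS]/[SEP] special tokens).
print(enc["offset_mapping"])

# With a slow (Python/sentencepiece) tokenizer, the same call raises
# NotImplementedError, which is the gap this feature request is about.
```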
Your contribution
I have tried using an existing sentencepiece-based model as a replacement. However, the HF conversion code means we are missing byte fallback support:
The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers
This means out-of-vocabulary tokens are simply mapped to <unk>.
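To illustrate the failure mode, here is a hedged sketch of the conversion attempt, assuming the LlamaTokenizer class from the in-progress PR and a hypothetical local checkpoint path; convert_slow_tokenizer is the helper that emits the warning quoted above:

```python
from transformers import LlamaTokenizer
from transformers.convert_slow_tokenizer import convert_slow_tokenizer

# Hypothetical local path to the LLaMa sentencepiece tokenizer files.
slow = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer")

# Emits the byte-fallback warning quoted above: the resulting fast
# tokenizer encodes out-of-vocabulary characters to <unk> instead of
# falling back to byte tokens (<0x00> ... <0xFF>) as sentencepiece does.
fast_backend = convert_slow_tokenizer(slow)
```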
Let's maybe wait for the LLaMa PR to be merged first?
It is fixed in tokenizers:
https://github.com/huggingface/tokenizers/pull/1183
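Once a tokenizers release that includes that PR is available, a minimal sketch of opting in would look like this (byte_fallback and the ByteFallback decoder are the additions from that PR; the byte tokens <0x00>–<0xFF> still have to be present in the vocabulary for the fallback to work):

```python
from tokenizers import Tokenizer, decoders
from tokenizers.models import BPE

# byte_fallback=True makes the model encode out-of-vocabulary characters
# as byte tokens (<0x00> ... <0xFF>) instead of the unknown token.
tokenizer = Tokenizer(BPE(byte_fallback=True))

# The matching decoder turns those byte tokens back into text.
tokenizer.decoder = decoders.ByteFallback()
```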
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.