Fixed Length Pre-Tokenizer
Introduces a pre-tokenizer that splits text into fixed-length chunks (closes https://github.com/huggingface/tokenizers/issues/1697).
The `pre_tokenize` method could be made more concise by first creating a vector with the char indices, like so:
```rust
let mut splits = Vec::new();
// `char_positions` holds the (byte offset, char) pairs of the normalized text.
for chunk in char_positions.chunks(self.length) {
    // Map the chunk of char positions back to byte offsets.
    let start = chunk.first().map(|(i, _)| *i).unwrap_or(0);
    let end = chunk.last().map(|(i, c)| i + c.len_utf8()).unwrap_or(text.len());
    splits.push(
        normalized
            .slice(Range::Normalized(start..end))
            .ok_or("Failed to slice normalized text")?,
    );
}
```
That would take a bit more memory, though, so I went with my approach instead.
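For illustration, here is a minimal standalone sketch of fixed-length character chunking that avoids materializing the char-position vector. It operates on a plain `&str` rather than a `NormalizedString`, and `chunk_fixed_length` is just an illustrative name, not the PR's actual implementation:

```rust
/// Split `text` into chunks of `length` characters (not bytes), tracking chunk
/// boundaries on the fly instead of collecting all char positions first.
fn chunk_fixed_length(text: &str, length: usize) -> Vec<&str> {
    assert!(length > 0, "chunk length must be non-zero");
    let mut chunks = Vec::new();
    let mut start = 0; // byte offset where the current chunk begins
    let mut count = 0; // characters accumulated in the current chunk
    for (i, _) in text.char_indices() {
        if count == length {
            chunks.push(&text[start..i]);
            start = i;
            count = 0;
        }
        count += 1;
    }
    if start < text.len() {
        chunks.push(&text[start..]);
    }
    chunks
}

fn main() {
    // 11 characters split into chunks of 4 characters each.
    assert_eq!(chunk_fixed_length("héllo world", 4), ["héll", "o wo", "rld"]);
}
```

The vector-based snippet above does the same boundary arithmetic but slices the `NormalizedString` via `Range::Normalized`, which keeps the alignment to the original text that `NormalizedString` maintains.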
Thanks for this. The code looks like it works, but I think it could be simplified quite a lot.
Is there any source/paper for doing fixed-size chunking? Before adding anything to the library, we usually try to make sure it's used in the wild and would benefit actual users of models (not necessarily researchers exploring new ideas; for those cases they can try out your branch or create their own pre_tokenizer directly in Python).
You're right, I simplified it along the lines of my initial comment.
I also asked the author of the issue whether this is a common approach in the literature (I'm not aware of it either). I should probably have clarified that before jumping on it ;)
According to the author, it's used in DNA Transformer models.
I'll fix the CI and merge this, sorry for being slow!