Unused Unicode Character Filter
This PR adds a Unicode normalizer to the tokenizers library, enabling filtering of unused and private-use code points based on their Unicode properties. These characters are often artifacts of editing text with proprietary programs and carry no useful signal for NLP tasks, so filtering them out can improve tokenizer quality.
The implementation covers the Rust core, Python bindings, and Node.js bindings, along with corresponding tests.
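To make the idea concrete, here is a minimal sketch written against the crate's `Normalizer` trait and `NormalizedString::filter`. The struct name and the hard-coded Private Use Area ranges are illustrative, not the PR's actual code, and the unassigned-code-point check is omitted because it requires a Unicode property table that the standard library does not provide:

```rust
use tokenizers::{NormalizedString, Normalizer, Result};

/// Hypothetical name; the PR's actual struct may differ.
/// Drops private-use code points from the input.
pub struct UnusedUnicodeNormalizer;

fn is_private_use(c: char) -> bool {
    // The three Private Use Areas defined by Unicode:
    // BMP PUA, Supplementary PUA-A, Supplementary PUA-B.
    matches!(
        c,
        '\u{E000}'..='\u{F8FF}' | '\u{F0000}'..='\u{FFFFD}' | '\u{100000}'..='\u{10FFFD}'
    )
}

impl Normalizer for UnusedUnicodeNormalizer {
    fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
        // `filter` keeps the characters for which the predicate is
        // true, so everything outside the Private Use Areas survives.
        normalized.filter(|c| !is_private_use(c));
        Ok(())
    }
}
```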
Thanks for the great PR!!
You created 3 booleans. Shouldn't we use a flag array instead, so users can filter any particular category in or out? (See the sketch after the links below.)
Is there no way to reuse any of the pre-existing dependencies?
https://en.wikipedia.org/wiki/UTF-8#Surrogates https://en.wikipedia.org/wiki/Private_Use_Areas
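One hypothetical shape for that suggestion, to make it concrete (the enum and struct names are illustrative, not from the PR): a single list of categories replaces the per-category booleans, so new categories can be added without changing the constructor signature.

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum CharCategory {
    Unassigned,
    PrivateUse,
    Surrogate,
}

pub struct UnusedUnicodeFilter {
    // Categories the user wants removed, instead of one boolean
    // field per category.
    filter_out: Vec<CharCategory>,
}

impl UnusedUnicodeFilter {
    pub fn new(filter_out: Vec<CharCategory>) -> Self {
        Self { filter_out }
    }

    fn filters(&self, cat: CharCategory) -> bool {
        self.filter_out.contains(&cat)
    }
}
```

Usage would then be e.g. `UnusedUnicodeFilter::new(vec![CharCategory::Unassigned, CharCategory::PrivateUse])`.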
- Simplified a bit and removed the surrogates option; surrogate code points cannot occur in valid UTF-8, so Rust's `char` cannot even represent them (see the check below)
- Looked into using the existing dependencies, but could not get the unassigned filter working with them
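For the record, a quick standard-library check showing why the surrogates option was moot (plain Rust, nothing from the PR assumed): surrogates are not valid Unicode scalar values, so they can never appear in a decoded `&str` or `char`.

```rust
fn main() {
    // U+D800 is a high surrogate. It is not a valid Unicode scalar
    // value, so it cannot even be constructed as a `char`.
    assert_eq!(char::from_u32(0xD800), None);

    // An ordinary code point converts as expected.
    assert_eq!(char::from_u32(0x0041), Some('A'));
}
```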