
Unused Unicode Character Filter

sanderland opened this pull request 5 months ago • 2 comments

This PR adds a Unicode normalizer to the tokenizers library that filters out unused and private-use code points based on their Unicode properties. Such characters are often artifacts of editing text in proprietary programs and carry no value for NLP tasks; filtering them out can improve tokenizer quality.

The implementation covers the Rust core, Python bindings, and Node.js bindings, along with corresponding tests.
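As a rough illustration of the behavior described (not the PR's actual implementation), the filtering can be sketched in plain Python with the standard library's unicodedata module, where unassigned code points have general category Cn and private-use ones Co:

```python
import unicodedata

def filter_unused(text: str,
                  remove_unassigned: bool = True,
                  remove_private_use: bool = True) -> str:
    """Drop code points whose Unicode general category marks them as
    unassigned (Cn) or private use (Co). Hypothetical helper for
    illustration only, not the tokenizers API."""
    drop = set()
    if remove_unassigned:
        drop.add("Cn")
    if remove_private_use:
        drop.add("Co")
    return "".join(ch for ch in text if unicodedata.category(ch) not in drop)

print(filter_unused("a\ue000b"))  # U+E000 is private use -> "ab"
```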

sanderland avatar Jul 23 '25 11:07 sanderland

Thanks for the great PR!!

You created 3 booleans. Shouldn't we use a flag array instead, so users can filter in or out any particular category?

Is there no way to reuse any of the pre-existing dependencies?

https://en.wikipedia.org/wiki/UTF-8#Surrogates https://en.wikipedia.org/wiki/Private_Use_Areas

Narsil avatar Sep 04 '25 14:09 Narsil
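The flag-array idea could look roughly like this — again a hypothetical Python sketch over Unicode general categories, not the library's real interface — with the caller passing whichever category codes should be stripped:

```python
import unicodedata

def filter_by_category(text: str, remove=frozenset({"Cn", "Co"})) -> str:
    """Strip code points whose Unicode general category appears in
    `remove`. Hypothetical sketch: category codes such as "Cn"
    (unassigned) and "Co" (private use) come from the Unicode standard."""
    return "".join(ch for ch in text if unicodedata.category(ch) not in remove)
```

With this shape, passing e.g. `remove={"Co"}` would drop only private-use characters while keeping unassigned ones, instead of toggling fixed booleans.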

  • Simplified it a bit and removed the surrogates option, which is not relevant for UTF-8 anyway
  • Looked into reusing the existing dependencies, but could not get the unassigned-character filter working with them
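The first bullet rests on the fact that surrogate code points (U+D800–U+DFFF) are not representable in well-formed UTF-8; a quick Python check illustrates this:

```python
# A lone surrogate cannot be encoded as valid UTF-8, so a
# surrogate-filtering option never triggers on well-formed UTF-8 input.
try:
    "\ud800".encode("utf-8")
    encodable = True
except UnicodeEncodeError:
    encodable = False

print(encodable)  # False
```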

sanderland avatar Sep 07 '25 20:09 sanderland