
Train tokenizer on integer lists, not strings

Open rteehas opened this issue 3 months ago • 6 comments

Hi,

I was hoping to train a BPE tokenizer, but in my case I have lists of integers rather than strings. I'd essentially like to apply the merging rules to adjacent integers in these lists, rather than to subword characters. Is there a straightforward way to do this? The current setup seems to require strings.

rteehas avatar Mar 16 '24 19:03 rteehas

Bumping this, as it would make the library easier and more straightforward to use for modalities other than text, e.g. molecules, DNA, and music.

In MidiTok we basically map each integer to a byte to "bypass" this limitation, but this is not straightforward and adds overhead.

Edit: this approach also only scales up to the number of Unicode characters.
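For reference, the mapping workaround described above can be sketched roughly like this (a minimal illustration, not MidiTok's actual implementation; `OFFSET` and the function names are hypothetical):

```python
# Sketch of the workaround: map each integer to a unique Unicode character
# so that a string-based BPE trainer can consume integer sequences.
OFFSET = 0x100  # hypothetical offset to skip ASCII control characters

def ints_to_str(ids):
    # Each integer becomes one character; adjacent integers become
    # adjacent characters, so BPE merges apply to adjacent integers.
    return "".join(chr(OFFSET + i) for i in ids)

def str_to_ints(s):
    # Inverse mapping to recover the original integer sequence.
    return [ord(c) - OFFSET for c in s]

seq = [5, 12, 5, 900]
assert str_to_ints(ints_to_str(seq)) == seq
```

The strings produced this way can then be fed to a string-based trainer, but as noted, the vocabulary is capped by the number of valid Unicode code points.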

Natooz avatar Mar 18 '24 10:03 Natooz

Second this. I'm training tokenizers on malware bytes. At the moment, I have to map bytes to UTF-8 characters before sending them through the tokenizers library. The tokenizers should work on any sequence, not just strings.
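A sketch of the byte-to-text mapping described above (one possible approach, not necessarily the commenter's exact code): Latin-1 maps each byte value 0–255 to the code point of the same value, so the round trip is lossless.

```python
def bytes_to_text(data: bytes) -> str:
    # Latin-1 assigns every byte 0..255 to the code point of the same
    # value, so every possible byte string decodes without error.
    return data.decode("latin-1")

def text_to_bytes(text: str) -> bytes:
    # Inverse mapping back to the original bytes.
    return text.encode("latin-1")

blob = bytes([0, 77, 90, 144, 255])
assert text_to_bytes(bytes_to_text(blob)) == blob
```

The resulting text can be passed to the tokenizers library as ordinary strings, at the cost of an extra conversion pass on every sample.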

lkurlandski avatar Mar 27 '24 19:03 lkurlandski

@Narsil @ArthurZucker how difficult do you estimate this?

Natooz avatar Apr 12 '24 09:04 Natooz

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar May 13 '24 01:05 github-actions[bot]