ChatRWKV
Faster tokenizer
This one is faster.
```
Benchmark 12700 tokens...
Encode 0.361 MB/s
Decode 12700000.0 MB/s
Encode 2.292 MB/s
Decode 12700000.0 MB/s
Encode 4.277 MB/s
Decode 12700000.0 MB/s
Unit test...
All OK
Benchmark 317500 tokens...
Encode 0.359 MB/s
Decode 17.167 MB/s
Encode 2.143 MB/s
Decode 17.687 MB/s
Encode 3.477 MB/s
Decode 26.242 MB/s
Unit test...
All OK
```
Not bad for Python, I guess.
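For anyone reproducing these numbers, here is a minimal sketch of how the encode/decode throughput in MB/s could be measured. The `tokenizer` object, its `encode`/`decode` methods, and the `benchmark` helper are illustrative assumptions, not necessarily the exact API or script used above.

```python
import time

def benchmark(tokenizer, text, repeats=3):
    # Rough sketch of an encode/decode throughput benchmark (assumed setup):
    # `tokenizer` is assumed to expose encode(str) -> list[int] and
    # decode(list[int]) -> str.
    size_mb = len(text.encode("utf-8")) / 1e6  # input size in megabytes

    for _ in range(repeats):
        t0 = time.perf_counter()
        tokens = tokenizer.encode(text)
        t1 = time.perf_counter()
        decoded = tokenizer.decode(tokens)
        t2 = time.perf_counter()

        print(f"Encode {size_mb / (t1 - t0):.3f} MB/s")
        print(f"Decode {size_mb / (t2 - t1):.3f} MB/s")

    # Round-trip check, in the spirit of the "Unit test... All OK" lines above.
    assert decoded == text
    print("All OK")
```

A loop over a few repeats like this would also produce several Encode/Decode pairs per benchmark, as in the output above.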