
Tokenization Example

rozek opened this issue on Apr 26, 2023 • 1 comment

First of all: thank you very much for the continuing work on llama.cpp - I'm using it every day with various models.

For proper context management, however, I often need to know how many tokens prompts and responses contain. There is an "embedding" example, but none for "tokenization".

This is why I wrote one myself (see my fork of llama.cpp).

It seems to work, but since I am not a C++ programmer, and not really an AI expert either, I hesitate to create a pull request.

Perhaps somebody else could have a look at it or create a better example for the public.

Thanks for all your effort!

rozek · Apr 26, 2023
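
For reference, counting the tokens in a prompt can be done with `llama_tokenize` from the C API. The sketch below is a hypothetical, minimal version; the exact signatures of `llama_init_from_file` and `llama_tokenize` have changed between llama.cpp releases, so check the `llama.h` of your checkout before relying on it.

```cpp
// Hypothetical sketch: count the tokens in a prompt with the llama.cpp C API.
// Function signatures follow the early-2023 API and may differ in newer releases.
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

int main(int argc, char ** argv) {
    if (argc < 3) {
        fprintf(stderr, "usage: %s <model> <prompt>\n", argv[0]);
        return 1;
    }

    llama_context_params params = llama_context_default_params();
    llama_context * ctx = llama_init_from_file(argv[1], params);
    if (ctx == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    const std::string prompt = argv[2];

    // Upper bound on the token count: one token per byte plus the BOS token.
    std::vector<llama_token> tokens(prompt.size() + 1);
    const int n = llama_tokenize(ctx, prompt.c_str(), tokens.data(), (int) tokens.size(), /*add_bos=*/true);
    if (n < 0) {
        fprintf(stderr, "tokenization failed\n");
        return 1;
    }
    tokens.resize(n);

    printf("prompt contains %d tokens\n", n);

    llama_free(ctx);
    return 0;
}
```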

I think test-tokenizer-0.cpp is a good example of a minimal tokenizer.

SlyEcho · Apr 27, 2023
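
For reference, the core of such a tokenizer test can be sketched as follows. This is a hypothetical outline in the spirit of `test-tokenizer-0.cpp`, not its actual contents, and the expected token IDs shown are placeholders that depend on the model's vocabulary.

```cpp
// Hypothetical sketch of a minimal tokenizer check: tokenize fixed strings and
// compare against expected token IDs. Expected values are placeholders.
#include "llama.h"

#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model>\n", argv[0]);
        return 1;
    }

    // Expected token IDs are model-specific; fill in values for your vocabulary.
    const std::map<std::string, std::vector<llama_token>> tests = {
        { "Hello world", { /* expected token IDs go here */ } },
    };

    llama_context * ctx = llama_init_from_file(argv[1], llama_context_default_params());
    if (ctx == NULL) {
        return 1;
    }

    int failures = 0;
    for (const auto & test : tests) {
        std::vector<llama_token> out(test.first.size() + 1);
        const int n = llama_tokenize(ctx, test.first.c_str(), out.data(), (int) out.size(), /*add_bos=*/false);
        out.resize(n < 0 ? 0 : n);

        if (out != test.second) {
            fprintf(stderr, "mismatch for '%s'\n", test.first.c_str());
            failures++;
        }
    }

    llama_free(ctx);
    return failures == 0 ? 0 : 1;
}
```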

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] · Apr 9, 2024