Why doesn't this package provide a way to calculate the number of tokens in the prompt, suffix and messages parameters?
Describe the feature or improvement you're requesting
People often need to count the tokens in the prompt and suffix parameters of the completions endpoint, or in the messages parameter of the new chat endpoint. We can use a package like tokenizers (from Hugging Face) or tiktoken, but it is not clear which pre-trained tokenizer to use for a given model, or how exactly the token counts from these different parameters should be summed and compared against the model's context length.
So why doesn't this package simply provide a way to count the tokens in prompts, suffixes, and messages, or to check that the sum of these tokens doesn't exceed the model's context length?
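For reference, here is a minimal sketch of the kind of helper I have in mind for the chat case. The per-message overhead constants (3 tokens per message, 1 extra for a `name` field, 3 tokens priming the reply) are assumptions based on OpenAI's published guidance for gpt-3.5-turbo/gpt-4-era models and may differ per model; the tokenizer is passed in as a plain function so any backend (e.g. tiktoken) can be plugged in:

```python
from typing import Callable, Iterable, List, Mapping


def num_tokens_from_messages(
    messages: Iterable[Mapping[str, str]],
    encode: Callable[[str], List[int]],
    tokens_per_message: int = 3,  # assumed per-message overhead (model-dependent)
    tokens_per_name: int = 1,     # assumed extra cost when a "name" field is present
    reply_priming: int = 3,       # assumed cost of priming the assistant's reply
) -> int:
    """Approximate the token count of a chat `messages` payload.

    `encode` is any text -> token-id-list function, for example the
    one from tiktoken.encoding_for_model("gpt-3.5-turbo").encode.
    """
    total = 0
    for message in messages:
        total += tokens_per_message
        for key, value in message.items():
            total += len(encode(value))  # both role and content are tokenized
            if key == "name":
                total += tokens_per_name
    return total + reply_priming
```

With tiktoken it could be used like this (hypothetical usage, requires `pip install tiktoken`):

```python
# import tiktoken
# enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
# n = num_tokens_from_messages([{"role": "user", "content": "Hello!"}], enc.encode)
```

The completions case (prompt plus suffix) would presumably just be `len(encode(prompt)) + len(encode(suffix))`, but documenting the exact accounting is precisely what I'm asking for.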
Additional context
I would be happy to work on this in my free time if you can tell me exactly how this is done under the hood for each model.