
[router] Use string instead of token ids

gaocegege opened this issue 10 months ago · 6 comments

Ref https://github.com/aibrix/aibrix/pull/641#discussion_r1947978897

Currently, we use token IDs to support prefix cache-aware routing, which requires encoding the input first. This adds several microseconds of latency to each request, and the benefits aren't substantial.

In Q&A scenarios, the raw input string can serve prefix matching just as well as token IDs. In RAG scenarios, we might get better results from something like CacheBlend rather than relying solely on token IDs.

Therefore, I propose using strings in the router.
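To make the proposal concrete, here is a minimal Go sketch of string-based prefix matching (the names and the chunking scheme are hypothetical, not the actual AIBrix gateway code): hash the prompt in fixed-size character chunks and route to the pod holding the longest matched prefix.

```go
package router

import (
	"crypto/sha256"
	"encoding/hex"
)

const chunkSize = 128 // characters per prefix chunk (hypothetical tuning knob)

// prefixChunkHashes hashes the prompt in fixed-size character chunks,
// cumulatively, so each chunk hash identifies the entire prefix up to
// that point.
func prefixChunkHashes(prompt string) []string {
	runes := []rune(prompt)
	hashes := make([]string, 0, len(runes)/chunkSize)
	h := sha256.New()
	for i := 0; i+chunkSize <= len(runes); i += chunkSize {
		h.Write([]byte(string(runes[i : i+chunkSize])))
		hashes = append(hashes, hex.EncodeToString(h.Sum(nil)))
	}
	return hashes
}

// MatchPod returns the pod caching the longest prefix of the prompt,
// given an index from chunk hash to pod name (hypothetical cache index).
func MatchPod(prompt string, index map[string]string) (pod string, ok bool) {
	for _, hash := range prefixChunkHashes(prompt) {
		p, found := index[hash]
		if !found {
			break // the longest cached prefix ends here
		}
		pod, ok = p, true
	}
	return pod, ok
}
```

No tokenizer is involved at any point, so the per-request encoding latency disappears entirely.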

gaocegege avatar Feb 14 '25 04:02 gaocegege

Introducing tokenization also brings complexity in tokenizer management (if we want every model to use its own tokenizer). We need to weigh the benefits and, at the very least, make this part configurable for now.

Jeffwan avatar Feb 14 '25 18:02 Jeffwan

Cross-posting my comments here for future consideration:


We can make it pluggable. It should be a system-wide variable that cannot change at runtime; otherwise, it will mess up the entire cache.

I think TokenizeInputText shouldn't live in each routing algorithm implementation. Currently, it is done in each Route function. It can be decoupled and moved to the common execution path (somewhere in gateway.go) before Route is called, as sketched below.
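A minimal sketch of that decoupling, with hypothetical types (the real AIBrix gateway interfaces may differ): the gateway tokenizes once on the common path, and only when the selected algorithm declares that it needs token IDs.

```go
package gateway

import "context"

// RoutingContext carries both representations so an algorithm can use
// either one (hypothetical type).
type RoutingContext struct {
	Prompt   string
	TokenIDs []int // populated only when the algorithm needs it
}

// Router is implemented by each routing algorithm.
type Router interface {
	NeedsTokenIDs() bool
	Route(ctx context.Context, rc RoutingContext) (pod string, err error)
}

// route performs tokenization in one place, on the common path, instead
// of inside every Route implementation.
func route(ctx context.Context, r Router, tokenize func(string) []int, prompt string) (string, error) {
	rc := RoutingContext{Prompt: prompt}
	if r.NeedsTokenIDs() {
		rc.TokenIDs = tokenize(prompt) // paid only by algorithms that opt in
	}
	return r.Route(ctx, rc)
}
```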

Tokenization itself has two minor issues:

1. Overhead (not sure before testing).
2. Debugging with the raw text is easier than looking at token IDs (so I used detokenization on my side when I debugged my routing implementation).

I am not sure these are critical enough to justify supporting different ways of input embedding (raw string, tokenization method 1, tokenization method 2, etc.).

gangmuk avatar Feb 20 '25 21:02 gangmuk

My previous proposal to move the tokenizer to the gateway was wrong. I missed the fact that tokenization is only used in prefix-aware routing. I measured the tokenizer overhead and it is non-negligible, actually much higher than the microseconds of latency mentioned above: 50ms to 100ms (the exact overhead will depend on the tokenizer library, but it will be there regardless). I don't see a fundamental reason to use a tokenizer, and it makes sense to avoid the processing overhead by not tokenizing at all.
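For anyone who wants to reproduce the measurement, a Go micro-benchmark along these lines would surface it (the Tokenizer interface and the naive stand-in are hypothetical; a real run would plug in the actual tokenizer binding):

```go
package router

import (
	"strings"
	"testing"
)

// Tokenizer is a hypothetical interface over whatever tokenizer binding
// the gateway would use.
type Tokenizer interface {
	Encode(text string) []int
}

// naiveTokenizer is a stand-in so the benchmark compiles; swap in the
// real tokenizer to reproduce the 50ms-100ms numbers.
type naiveTokenizer struct{}

func (naiveTokenizer) Encode(text string) []int {
	return make([]int, len(strings.Fields(text)))
}

func BenchmarkTokenize(b *testing.B) {
	var tok Tokenizer = naiveTokenizer{}
	prompt := strings.Repeat("the quick brown fox ", 200) // ~4k characters
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = tok.Encode(prompt)
	}
}
```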

gangmuk avatar Feb 28 '25 01:02 gangmuk

To be honest, I'm not entirely clear on the benefits of the token ID-based approach. Could someone shed some light on its advantages or explain why it's used? Personally, I think we could consider eliminating it and transitioning to a string-based approach instead (no need to keep two implementations).

gaocegege avatar Feb 28 '25 02:02 gaocegege

@gaocegege Originally, I thought the token-based solution would align better with the "page" tokens in vLLM, and chunk-by-chunk alignment would be tidier than mixing two different representations (tokens vs. characters/strings). We use a similar token-based approach inside the engine for the prefix cache, but things are different on the gateway side: the overhead is too large. In this case, we should get rid of it entirely.
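For context, a sketch of the token-block hashing that would have mirrored the engine's paged prefix cache (a block size of 16 tokens matches vLLM's default; everything else here is hypothetical), i.e. the chunk-by-chunk alignment the token-based router was aiming for:

```go
package router

import (
	"crypto/sha256"
	"encoding/binary"
	"encoding/hex"
)

const blockSize = 16 // tokens per block, matching vLLM's default page size

// tokenBlockHashes hashes the token IDs block by block, cumulatively, so
// each block hash identifies the whole token prefix, the same way the
// engine's paged KV cache is keyed. A string-based router approximates
// this alignment rather than reproducing it exactly.
func tokenBlockHashes(tokenIDs []int) []string {
	hashes := make([]string, 0, len(tokenIDs)/blockSize)
	h := sha256.New()
	buf := make([]byte, 8)
	for i := 0; i+blockSize <= len(tokenIDs); i += blockSize {
		for _, id := range tokenIDs[i : i+blockSize] {
			binary.LittleEndian.PutUint64(buf, uint64(id))
			h.Write(buf)
		}
		hashes = append(hashes, hex.EncodeToString(h.Sum(nil)))
	}
	return hashes
}
```

The tidiness is real, but producing the tokenIDs input is exactly the 50ms-100ms step measured above, which is why dropping it makes sense at the gateway.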

Jeffwan avatar Feb 28 '25 05:02 Jeffwan

Makes sense. Thanks for the explanation.

gaocegege avatar Feb 28 '25 07:02 gaocegege