
Continuous batching

Open andreapiso opened this issue 1 year ago • 6 comments

Recently, a lot of benchmarks point to the fact that if you want to serve your models behind an API, continuous batching grants higher throughput and lower latency compared to static batching. Some examples of systems that implement continuous batching:

  • text-generation-inference from huggingface: https://github.com/huggingface/text-generation-inference
  • vLLM (which also includes an inference engine) https://github.com/vllm-project/vllm
  • Ray, starting from the upcoming 2.6 release

In order to enable continuous batching, it is necessary to be able to:

  1. add requests to an existing running batch, if there are enough resources to accept them (as opposed to static batching, where requests need to be submitted all together)
  2. remove a request from the batch early when it reaches the stop token (as opposed to returning all requests at the same time).
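The two requirements above can be sketched as a simple scheduler loop. Everything here is hypothetical: the `Request` class, the capacity limit, and the toy `step` function (which just echoes the prompt) stand in for whatever a real engine would do when it runs one decoding step of the model.

```python
import queue
from dataclasses import dataclass, field

EOS = "</s>"    # assumed stop token
MAX_BATCH = 4   # assumed batch capacity

@dataclass
class Request:
    # Hypothetical request: a prompt and the tokens generated so far.
    prompt: list
    output: list = field(default_factory=list)
    done: bool = False

def step(request: Request) -> None:
    # Stand-in for one decoding step; a real engine would run the model here.
    # This toy version echoes the prompt, then emits the stop token.
    if len(request.output) < len(request.prompt):
        next_token = request.prompt[len(request.output)]
    else:
        next_token = EOS
    request.output.append(next_token)
    request.done = next_token == EOS

def serve(incoming: queue.Queue, finished: list) -> None:
    batch = []
    while batch or not incoming.empty():
        # Requirement 1: admit new requests whenever there is spare capacity,
        # even though other requests in the batch are mid-generation.
        while len(batch) < MAX_BATCH and not incoming.empty():
            batch.append(incoming.get_nowait())
        # Run one decoding step for the whole batch.
        for req in batch:
            step(req)
        # Requirement 2: retire requests that hit the stop token immediately,
        # freeing their slots instead of waiting for the slowest request.
        for req in [r for r in batch if r.done]:
            batch.remove(req)
            finished.append(req)
```

With this loop, a short request leaves the batch as soon as it finishes, and its slot can be reused by the next waiting request on the following step.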

Is this concept compatible with CTranslate2 architecture? I am keen to build an inference engine on top of CTranslate2, would love to hear some thoughts around this before I deep dive into it.

andreapiso avatar Jul 06 '23 23:07 andreapiso

#1317

michaelfeil avatar Jul 07 '23 08:07 michaelfeil

@michaelfeil is this related? Yes, vLLM supports continuous batching, but I'm looking to understand if CTranslate2 can be extended to support that, without using vLLM.

andreapiso avatar Jul 07 '23 08:07 andreapiso

  1. Currently it is not possible to add an entry to a batch that is already running. However, you could buffer incoming requests and batch them together before calling CTranslate2. I think this is already good enough in many situations.
  2. This is already possible. There is a callback parameter to get tokens as soon as they are generated, and finished requests are removed from the batch.
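The buffering suggested in point 1 is a micro-batching pattern. A minimal sketch, assuming a blocking request queue and small illustrative limits; the `submit` callable is a placeholder for whatever actually invokes CTranslate2 (e.g. something like `translate_batch` on the collected inputs):

```python
import queue
import time

def micro_batcher(requests: queue.Queue, submit, max_batch=8, max_wait=0.05):
    """Collect requests until either `max_batch` are buffered or `max_wait`
    seconds have passed since the first one arrived, then hand the whole
    group to `submit` as a single batch."""
    batch = [requests.get()]                 # block until the first request
    deadline = time.monotonic() + max_wait
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            batch.append(requests.get(timeout=remaining))
        except queue.Empty:
            break                            # window elapsed with no new request
    submit(batch)
```

The trade-off is the usual one: a larger `max_wait` yields bigger batches and higher throughput, at the cost of added latency for the first request in each window.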

guillaumekln avatar Jul 07 '23 08:07 guillaumekln

Yes, buffering incoming requests and sending them together is what I meant by static batching.

Is 1. not possible today because of a difference in architecture between CT2 and HF Transformers, or is it possible in theory, but the mechanism has not been implemented?

andreapiso avatar Jul 07 '23 13:07 andreapiso

CT2 was not designed with this feature in mind, so it is not trivial to implement. But of course it is possible in theory.

guillaumekln avatar Jul 07 '23 14:07 guillaumekln