When will FasterTransformer support continuous batching and PagedAttention?

Open ppppppppig opened this issue 1 year ago • 9 comments

From this article, I learned that continuous batching and PagedAttention greatly improve the inference performance of large models. I would like to know whether FasterTransformer has plans to support these two features.

ppppppppig · Jun 30 '23 10:06

I have used FT + Triton Server, TGI, and vLLM; the throughput of vLLM's iteration-level (token-level) batching is clearly higher than that of request-level batching.
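For reference, a minimal sketch of the vLLM side of such a comparison (the model name, prompts, and sampling settings below are placeholders; the request-level baseline would be measured separately with FT + Triton Server or TGI):

```python
# Rough vLLM throughput measurement (iteration-level batching is vLLM's default).
# Model, prompts, and sampling settings are placeholders, not from this thread.
import time
from vllm import LLM, SamplingParams

prompts = ["Summarize the benefits of continuous batching."] * 256
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

llm = LLM(model="meta-llama/Llama-2-7b-hf")   # any supported HF model

start = time.time()
outputs = llm.generate(prompts, sampling_params)
elapsed = time.time() - start

generated_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated_tokens / elapsed:.1f} generated tokens/s over {len(prompts)} requests")
```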

hudengjunai · Jul 07 '23 06:07

The FastServe paper, "Fast Distributed Inference Serving for Large Language Models", discusses this problem.

hudengjunai · Jul 07 '23 06:07

Following

sfc-gh-jhilgart · Jul 13 '23 03:07

Following

gttiankai · Jul 18 '23 09:07

Has anybody tested vLLM's throughput compared with FasterTransformer?

lucasjinreal · Jul 21 '23 06:07

Based on FasterTransformer, we have implemented an efficient inference engine - TurboMind

  • It supports llama and llama-2
  • It models the inference of a conversational LLM as a persistently running batch whose lifetime spans the entire serving process, called the "persistent batch", which is similar to continuous batching (a toy sketch of the idea follows below). This document presents the architecture in more detail.
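A minimal, purely illustrative sketch of that scheduling idea (fixed slot count and made-up decode lengths; this is not TurboMind's actual implementation):

```python
# Toy illustration of a "persistent batch": a fixed set of batch slots stays
# alive for the whole serving process and queued requests join whenever a
# slot frees up. Slot count and decode lengths are invented for illustration.
import random
from collections import deque

MAX_SLOTS = 4
queue = deque(range(10))          # hypothetical request ids waiting to be served
slots = {}                        # slot index -> [request id, remaining decode steps]

step = 0
while queue or slots:
    # Admission: a waiting request takes any free slot immediately,
    # without waiting for the rest of the batch to finish.
    for s in range(MAX_SLOTS):
        if s not in slots and queue:
            req = queue.popleft()
            slots[s] = [req, random.randint(2, 6)]
            print(f"step {step}: request {req} admitted to slot {s}")
    # One decode iteration for every request currently in the batch.
    for s in list(slots):
        slots[s][1] -= 1
        if slots[s][1] == 0:
            print(f"step {step}: request {slots[s][0]} finished, slot {s} freed")
            del slots[s]
    step += 1
```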

lvhan028 · Jul 25 '23 04:07

I read the document you linked. Having the persistent batch keep the KV cache of multi-turn conversations can indeed speed up inference during a dialogue. But it still seems different from continuous batching: my understanding is that with continuous batching, when a batch of requests is being processed and a new request arrives, the new request does not have to wait for every request in that batch to finish; once enough requests in the batch have completed, it is processed directly together with the batch's unfinished requests.
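To make the distinction concrete, here is a toy comparison with invented decode lengths; neither function corresponds to any real engine, it only contrasts the two admission policies:

```python
# Toy comparison: request-level (static) batching vs. continuous batching.
# Decode lengths and batch size are invented purely for illustration.
from collections import deque

lengths = [3, 8, 2, 7, 4, 6, 5, 1]   # decode steps each request needs
BATCH = 4

def static_batching(lengths):
    """New requests wait until the whole current batch has finished."""
    steps, queue = 0, deque(lengths)
    while queue:
        batch = [queue.popleft() for _ in range(min(BATCH, len(queue)))]
        steps += max(batch)           # batch runs until its longest request is done
    return steps

def continuous_batching(lengths):
    """A waiting request joins as soon as any slot in the batch frees up."""
    steps, queue, slots = 0, deque(lengths), []
    while queue or slots:
        while queue and len(slots) < BATCH:
            slots.append(queue.popleft())
        slots = [r - 1 for r in slots if r - 1 > 0]   # one decode iteration
        steps += 1
    return steps

print("static    :", static_batching(lengths), "decode steps")
print("continuous:", continuous_batching(lengths), "decode steps")
```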

ppppppppig · Jul 25 '23 09:07

Requests in the queue will join the batch as long as there are free slots in the persistent batch.

lvhan028 · Jul 28 '23 00:07

FasterTransformer development has transitioned to TensorRT-LLM. Continuous batching (in-flight batching) and PagedAttention are supported in TensorRT-LLM. Please give it a try.
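For anyone arriving later, a minimal sketch assuming the high-level Python LLM API that newer TensorRT-LLM releases provide (module, class, and argument names may differ by version, and the model name is only a placeholder); in-flight batching and the paged KV cache are handled by the runtime rather than configured per request:

```python
# Sketch only: assumes the tensorrt_llm high-level LLM API from releases
# newer than this thread; check the TensorRT-LLM docs for your version.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")          # placeholder model
params = SamplingParams(temperature=0.8, max_tokens=64)

# Requests submitted together are scheduled with in-flight (continuous) batching.
for out in llm.generate(["What is paged attention?",
                         "Why does continuous batching help throughput?"], params):
    print(out.outputs[0].text)
```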

byshiue · Oct 20 '23 07:10