
vLLM/OpenAI Compatible Endpoint

Open · Elsayed91 opened this issue 1 year ago · 5 comments

Is your feature request related to a problem? Please describe. The vLLM backend works well and is easy to set up, compared to TensorRT, which had me pulling my hair out.

However it lacks the OpenAI compatible endpoint that ships with vLLM itself.

The /generate endpoint on its own requires work to set up for chat applications (work that, honestly, I don't know how to do).
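To illustrate the kind of glue code this implies, here is a rough sketch of adapting OpenAI-style chat messages into a payload for the raw /generate endpoint. The chat template and the `vllm_model` name are illustrative assumptions, not something any given model actually expects:

```python
# Sketch: flattening OpenAI-style chat messages into a /generate payload
# for Triton's vLLM backend. The role-prefix template below is a made-up
# placeholder; real models need their own chat template.

def messages_to_prompt(messages):
    """Flatten chat messages into a single prompt string (illustrative template)."""
    parts = [f"{m['role']}: {m['content']}" for m in messages]
    parts.append("assistant:")
    return "\n".join(parts)

def build_generate_payload(messages, max_tokens=128, temperature=0.7):
    """Build the JSON body for POST /v2/models/<model>/generate."""
    return {
        "text_input": messages_to_prompt(messages),
        "parameters": {"max_tokens": max_tokens, "temperature": temperature},
    }

# A client would then POST this, e.g. (assuming a model named vllm_model):
# requests.post("http://localhost:8000/v2/models/vllm_model/generate",
#               json=build_generate_payload(msgs))
```

With an OpenAI-compatible endpoint, none of this adapter code would be needed.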

In essence, just by adopting Triton's vLLM backend instead of vLLM itself, you have to develop classes and interfaces for all of these things yourself.

Not to mention that LangChain has no LLM implementation for Triton, and LlamaIndex's is a bit primitive, undocumented, and buggy.

Describe the solution you'd like Expose vLLM's OpenAI-compatible endpoint as an available endpoint when using Triton.

Additional context Pros:

  • Better integration with LangChain (through ChatOpenAI) and LlamaIndex
  • Triton becomes orders of magnitude easier to set up, run, and migrate to (i.e. you don't have to rebuild your whole toolset to accommodate Triton)
  • Better out-of-the-box integration with the many tools on the market that target OpenAI-compatible endpoints (e.g. Langfuse, LangSmith)

It would be wonderful if this existed as a feature for all backends, but for now, with vLLM's implementation as a reference, that is probably the best starting point.
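To make the first pro concrete, this is roughly what the integration story would look like once such an endpoint exists: standard OpenAI-style clients just get pointed at the Triton host. The host, port, and model name below are placeholders, not anything Triton exposes today:

```python
# Sketch: the kwargs an OpenAI-compatible client would need to talk to a
# hypothetical Triton-hosted /v1 endpoint. All values are placeholders.

def openai_client_config(host="localhost", port=8000, model="vllm_model"):
    """Return connection settings for an OpenAI-compatible client."""
    return {
        "base_url": f"http://{host}:{port}/v1",
        "api_key": "not-needed",  # many local servers ignore the key
        "model": model,
    }

# Usage with LangChain would then be the standard path (untested sketch):
# from langchain_openai import ChatOpenAI
# cfg = openai_client_config()
# llm = ChatOpenAI(base_url=cfg["base_url"],
#                  api_key=cfg["api_key"], model=cfg["model"])
```

The point is that no Triton-specific client code would be required at all.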

https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/serving_chat.py
https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/api_server.py
https://github.com/npuichigo/openai_trtllm/tree/main

Elsayed91 avatar Mar 10 '24 14:03 Elsayed91

@Elsayed91 I filed a feature request with the team: DLIS-6323

lkomali avatar Mar 13 '24 21:03 lkomali

The lack of OpenAI-style support made me abandon it outright.

gongyifeiisme avatar Mar 21 '24 08:03 gongyifeiisme

Any update or progress on this?

panpan0000 avatar Apr 16 '24 07:04 panpan0000

@panpan0000, @Elsayed91 is the goal improved integration with LlamaIndex / LangChain, or direct OpenAI-endpoint support?

Would support via the Python in-process API be sufficient, or is a C/C++ implementation required?

nnshah1 avatar Apr 26 '24 05:04 nnshah1

@panpan0000, @Elsayed91 is the goal improved integration with LlamaIndex / LangChain, or direct OpenAI-endpoint support?

Would support via the Python in-process API be sufficient, or is a C/C++ implementation required?

Sorry @nnshah1, I don't quite understand what you mentioned. This is a similar issue which may help clarify: https://github.com/triton-inference-server/server/issues/6583

panpan0000 avatar May 14 '24 08:05 panpan0000
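On nnshah1's question about the Python in-process API: one plausible shape for that approach is a thin Python HTTP shim that runs inference in-process and wraps the result in an OpenAI-style response. The shim itself is only hinted at in comments below; the response-envelope builder is the concrete part, and its field layout follows the public OpenAI chat.completion schema:

```python
# Sketch: the OpenAI-style response envelope a Python shim over Triton's
# in-process API could return. Everything about how inference is invoked
# is an assumption and shown only as comments.
import time
import uuid

def chat_completion_envelope(model, text):
    """Wrap generated text in an OpenAI-style chat.completion response."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    }

# A FastAPI route could then tie it together (hypothetical):
# @app.post("/v1/chat/completions")
# async def chat(req: ChatRequest):
#     text = run_inference_in_process(req)  # via Triton's Python API
#     return chat_completion_envelope(req.model, text)
```

Whether this belongs in a Python shim or a C/C++ frontend is exactly the trade-off nnshah1 is asking about.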