Implementing Echo in OpenAI endpoint

Open · andreamad8 opened this issue on Jun 22, 2023

Maybe not too urgent, but it would be nice to have `echo` in the OpenAI interface; it would facilitate scoring (e.g., on QA datasets).

andreamad8 avatar Jun 22 '23 00:06 andreamad8
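The scoring use case mentioned above works by asking the server to echo the prompt back with per-token logprobs and summing them. Below is a minimal sketch of that pattern against an OpenAI-compatible completions endpoint, assuming a vLLM server at localhost:8000 and a placeholder model name (both are illustrative, not part of the original thread):

```python
from openai import OpenAI

# Point the standard OpenAI client at a local vLLM server (assumed URL).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="facebook/opt-125m",   # placeholder; use whatever the server loads
    prompt="Question: What is the capital of France? Answer: Paris",
    max_tokens=0,                # generate nothing; we only want the prompt
    echo=True,                   # return the prompt tokens in the response
    logprobs=1,                  # include per-token log-probabilities
)

token_logprobs = completion.choices[0].logprobs.token_logprobs
# The first prompt token has no preceding context, so its logprob is None.
score = sum(lp for lp in token_logprobs if lp is not None)
print(f"sum of prompt logprobs: {score:.3f}")
```

Summing (or length-normalizing) these logprobs over candidate answers is the usual way echo is used to score QA datasets.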

As you mentioned, the main blocker for adding echo is letting the vLLM engine also compute the logits for the prompt tokens. This is on our roadmap. That said, feel free to contribute; I believe this would be a very good issue for getting familiar with vLLM and understanding its structure better.

zhuohan123 avatar Jun 22 '23 15:06 zhuohan123

Hi guys,

I'm also interested in having the echo feature implemented for my use case, and I would love to try contributing to this issue. I've been using vLLM and it's been great so far! I'd appreciate any suggestions on how to get started. I noticed that the model's forward pass already produces everything needed to compute the logits for the prompt tokens, but I'm not sure how to wire that into the engine. Is the idea to run the prompt tokens through one at a time as well, or to return their logits alongside the first sampled token?

matheper avatar Aug 04 '23 21:08 matheper
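To sketch an answer to the question above in general terms (this is the underlying math, not vLLM's engine internals): a causal LM's single forward pass over the prompt already yields logits at every position, where the logits at position i are the distribution over token i+1, so no token-by-token loop is needed. With a Hugging Face-style model that returns `.logits`:

```python
import torch
import torch.nn.functional as F

def prompt_logprobs(model, input_ids):
    """Log-probability of each prompt token given the tokens before it."""
    with torch.no_grad():
        logits = model(input_ids).logits          # [batch, seq_len, vocab]
    log_probs = F.log_softmax(logits.float(), dim=-1)
    # Logits at position i predict token i+1, so shift the targets by one.
    targets = input_ids[:, 1:].unsqueeze(-1)      # [batch, seq_len-1, 1]
    scores = log_probs[:, :-1].gather(-1, targets).squeeze(-1)
    return scores  # the very first token has nothing to condition on
```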

> As you mentioned, the main blocker for adding echo is letting the vLLM engine also compute the logits for the prompt tokens. This is on our roadmap. [...]

@zhuohan123 could you point me toward the right place in the codebase to start implementing this feature?

winglian avatar Aug 10 '23 17:08 winglian

@andreamad8 looks like this was solved in #1504; this issue can be closed 😄

hmellor avatar Feb 02 '24 17:02 hmellor
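For readers landing here later: vLLM's offline API exposes the same capability via `SamplingParams(prompt_logprobs=...)`, presumably the same machinery the server-side echo builds on. A hedged sketch, assuming a recent vLLM version and a placeholder model (the exact shape of each returned entry has varied across releases):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder model
params = SamplingParams(max_tokens=1, prompt_logprobs=1)

output = llm.generate(["Question: 2 + 2 = ? Answer: 4"], params)[0]
# output.prompt_logprobs is aligned with the prompt tokens; the first
# entry is None because the first token has no preceding context.
for entry in output.prompt_logprobs:
    print(entry)
```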