
Expose vLLM logprobs in model output

Open CoolFish88 opened this issue 1 year ago • 3 comments

Description

vLLM's sampling parameters include a richer set of options than what is currently exposed; among them, logprobs is the most broadly useful.

When testing by adding the logprobs option to the request payload, the model output schema was unchanged ({"generated_text": "model_output"}), suggesting the parameter was not propagated to the output.
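A minimal sketch of the kind of request described above, assuming a DJL Serving endpoint at a hypothetical local URL; the "logprobs" key here is the vLLM-style sampling parameter that did not appear to propagate to the output:

```python
# Hypothetical reproduction of the request described in this issue.
import json
import requests

payload = {
    "inputs": "The capital of France is",
    "parameters": {
        "max_new_tokens": 32,
        "temperature": 0.7,
        "logprobs": 5,  # vLLM-style option; appears to be ignored here
    },
}

resp = requests.post(
    "http://localhost:8080/predictions/model",  # hypothetical endpoint URL
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
)
print(resp.json())  # observed: {"generated_text": "..."} with no logprobs
```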

Will this change the current api? How?

Probably by enriching the output schema.

Who will benefit from this enhancement?

Anyone who wants logprobs extracted from model predictions.

References

  • This thread provides a starting point for tackling this issue.

CoolFish88 avatar Oct 01 '24 20:10 CoolFish88

@sindhuvahinis

frankfliu avatar Oct 02 '24 04:10 frankfliu

Found this while looking into CloudWatch logs:

The following parameters are not supported by vllm with rolling batch: {'max_tokens', 'seed', 'logprobs', 'temperature'}

CoolFish88 avatar Oct 02 '24 09:10 CoolFish88

What is the payload you are using to invoke the endpoint?

We do expose generation parameters that can be included in the inference request. Details are in https://docs.djl.ai/master/docs/serving/serving/docs/lmi/user_guides/lmi_input_output_schema.html.

We have slightly different names for some of the generation/sampling parameters because our API unifies different inference backends such as vllm, tensorrt-llm, huggingface accelerate, and transformers-neuronx.
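Per the linked input/output schema docs, the unified parameter names differ from vLLM's (for example, max_new_tokens rather than max_tokens), and requesting details in the parameters should return token-level information. A minimal sketch, assuming the same hypothetical endpoint as above; verify the exact field names against the linked documentation:

```python
# Sketch using the unified LMI parameter names; see the linked
# lmi_input_output_schema docs for the authoritative field list.
import json
import requests

payload = {
    "inputs": "The capital of France is",
    "parameters": {
        "max_new_tokens": 32,   # unified name (vLLM calls this max_tokens)
        "temperature": 0.7,
        "details": True,        # ask for token-level details in the response
    },
}

resp = requests.post(
    "http://localhost:8080/predictions/model",  # hypothetical endpoint URL
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
)
out = resp.json()
print(out["generated_text"])
# If supported by the deployed version, per-token details (including log
# probabilities) should appear under a "details" key in the response.
```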

If you want to use a different API schema, we provide documentation on writing your own input/output parsers: https://docs.djl.ai/master/docs/serving/serving/docs/lmi/user_guides/lmi_input_output_schema.html#custom-pre-and-post-processing.
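A rough sketch (not the LMI formatter interface itself) of the kind of post-processing a custom output formatter could perform: attach per-token log probabilities alongside the generated text. The exact formatter signature and registration mechanism are described in the custom pre/post-processing docs linked above.

```python
# Illustrative helper only; the function name and wiring into LMI are
# hypothetical, but the enriched output shape matches what this issue asks for.
import json
from typing import List, Optional


def build_enriched_output(generated_text: str,
                          token_texts: List[str],
                          token_logprobs: Optional[List[float]]) -> str:
    """Return a JSON body that keeps generated_text but adds per-token logprobs."""
    body = {"generated_text": generated_text}
    if token_logprobs is not None:
        body["tokens"] = [
            {"text": t, "log_prob": lp}
            for t, lp in zip(token_texts, token_logprobs)
        ]
    return json.dumps(body)


# Example of the enriched schema:
print(build_enriched_output("Paris", ["Par", "is"], [-0.12, -0.03]))
```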

We also support the OpenAI chat completions schema for chat-type models: https://docs.djl.ai/master/docs/serving/serving/docs/lmi/user_guides/chat_input_output_schema.html.
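A minimal sketch of an OpenAI-style chat completions request against the same hypothetical endpoint; whether the "logprobs" and "top_logprobs" flags are honored depends on the deployed LMI/vLLM version, so treat them as assumptions to verify against the chat schema docs linked above.

```python
# Hypothetical OpenAI-style chat completions request; endpoint URL and
# logprobs support are assumptions to confirm against the linked docs.
import json
import requests

payload = {
    "messages": [
        {"role": "user", "content": "Name the capital of France."}
    ],
    "max_tokens": 32,
    "logprobs": True,    # OpenAI-style flag for per-token log probabilities
    "top_logprobs": 3,   # number of alternatives per position, if supported
}

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # hypothetical endpoint URL
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
)
print(resp.json())
```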

siddvenk avatar Oct 02 '24 15:10 siddvenk