
Inconsistent Output between LightLLM and Transformers Inference Library

Open Lvjinhong opened this issue 1 year ago • 2 comments

When specifying `max_new_tokens`, LightLLM's output length consistently reaches this maximum value. However, Transformers sometimes stops earlier depending on the model itself, producing outputs shorter than the specified `max_new_tokens`. I believe Transformers is correct in this approach: it is implausible for every generation to exactly fill `max_new_tokens`, and forcing that would only lead to repetitive output.
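For reference, a minimal sketch of the Transformers behavior I expect (the model id is a placeholder): generation halts as soon as the model emits its EOS token, so the output can be shorter than `max_new_tokens`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-model")  # placeholder model id
model = AutoModelForCausalLM.from_pretrained("your-model")

inputs = tokenizer("What is 2 + 2?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,                   # upper bound, not a target length
    eos_token_id=tokenizer.eos_token_id,  # generation stops early if this id is produced
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```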

Lvjinhong avatar Jan 19 '24 07:01 Lvjinhong

@Lvjinhong You can specify the stop token ID by setting the `--eos_id xxx` argument when starting the server, or by using the `stop_sequences` parameter in the request parameters.
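A rough sketch of the `stop_sequences` route; the endpoint path, port, and JSON schema here are assumptions and may differ across LightLLM versions, so check the server README for the exact request format.

```python
import requests

resp = requests.post(
    "http://localhost:8080/generate",  # assumed default host/port and endpoint
    json={
        "inputs": "Hello, who are you?",
        "parameters": {
            "max_new_tokens": 128,
            # generation stops as soon as one of these strings is produced
            "stop_sequences": ["</s>"],
        },
    },
)
print(resp.json())
```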

hiworldwzj avatar Jan 22 '24 09:01 hiworldwzj

@Lvjinhong You can also check whether your input has the correct prompt template applied. LightLLM does not apply a prompt template to inputs, while Transformers usually applies one in its chat functions.
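One way to verify this is to build the chat-formatted prompt yourself with the tokenizer's chat template and pass that full string to LightLLM; a sketch (the model id is a placeholder, and this assumes the tokenizer ships a chat template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-chat-model")  # placeholder model id
messages = [{"role": "user", "content": "Hello, who are you?"}]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant-turn header
)
print(prompt)  # send this full string as the LightLLM input
```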

shihaobai avatar Jan 23 '24 08:01 shihaobai