Update inference.py
Add eos_token_id from the generation config file so that Llama3 can perform inference correctly.
Why are these changes needed?
Adding the eos_token_id from the generation config file to stop_token_ids is crucial for ensuring that inference terminates correctly for models like Llama 3. Typically, the eos_token_id used for model inference is sourced from the tokenizer's config file and is identical to the one in the model's generation config. However, for some models, such as Llama 3, the two can differ, which prevents inference from stopping correctly. By explicitly including the eos_token_id from the generation config, the framework handles inference termination more reliably and supports a wider range of models.
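Below is a minimal sketch of the idea, not the actual change in `inference.py`: the helper name `collect_stop_token_ids` and its signature are hypothetical. It loads the model's `GenerationConfig` and merges its `eos_token_id` value(s) into the stop-token list alongside the tokenizer's `eos_token_id`.

```python
from transformers import AutoTokenizer, GenerationConfig


def collect_stop_token_ids(model_path: str, stop_token_ids=None):
    """Merge eos_token_id(s) from both the tokenizer and the model's
    generation config into a deduplicated stop-token list.

    For Llama 3, generation_config.json lists multiple EOS ids
    (e.g. <|end_of_text|> and <|eot_id|>), while the tokenizer exposes
    only one, so relying on the tokenizer alone can miss a stop token
    and let generation run past the end of a turn.
    """
    stop_token_ids = list(stop_token_ids or [])

    tokenizer = AutoTokenizer.from_pretrained(model_path)
    if tokenizer.eos_token_id is not None:
        stop_token_ids.append(tokenizer.eos_token_id)

    try:
        gen_config = GenerationConfig.from_pretrained(model_path)
    except OSError:
        gen_config = None  # some models ship without generation_config.json

    if gen_config is not None and gen_config.eos_token_id is not None:
        eos = gen_config.eos_token_id
        # eos_token_id may be a single int or a list of ints
        stop_token_ids.extend(eos if isinstance(eos, list) else [eos])

    # Deduplicate while preserving order
    return list(dict.fromkeys(stop_token_ids))
```

During generation, each sampled token id would be checked against this merged list to decide when to stop, so a token that appears only in the generation config still terminates inference.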
Related issue number (if applicable)
N/A
Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed.
- [x] I've made sure the relevant tests are passing (if applicable).