FastChat
Add option to set the seed for inference
For reproducibility, it's generally useful to be able to set a seed for prompt generation. This PR adds that option to the OpenAI-compatible API endpoint and to the model.
Note: I am not 100% sure that setting the seed for torch is
- enough
- free of interference with potential parallel executions
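The basic idea behind the caveats above can be sketched as follows. This is only an illustration of seeded, reproducible sampling using Python's stdlib `random` (the PR itself seeds torch, e.g. via `torch.manual_seed`); `set_seed` and `sample_tokens` are hypothetical names, not functions from FastChat. Because the seed is global state, concurrent requests could interleave and break determinism, which is exactly the parallel-execution concern raised here; per-request generator objects would avoid that.

```python
import random

def set_seed(seed: int) -> None:
    # Seed the global RNG so repeated generations are reproducible.
    # NOTE: global state; not safe under parallel requests.
    random.seed(seed)

def sample_tokens(n: int) -> list[int]:
    # Stand-in for token sampling during text generation.
    return [random.randrange(1000) for _ in range(n)]

set_seed(42)
first = sample_tokens(5)
set_seed(42)
second = sample_tokens(5)
assert first == second  # same seed -> identical "generation"
```

With torch, the analogous parallel-safe variant would pass a dedicated `torch.Generator` (seeded per request) into the sampling calls instead of seeding the global RNG.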
@nielstron Could you rebase to the latest main branch?
Done
Closed due to inactivity. Feel free to rebase and reopen.