FastChat
Added llm-judge support for GPT-4o
Why are these changes needed?
These small changes add GPT-4o support to llm-judge. Since GPT-4o is much faster and cheaper than GPT-4 (and also stronger in quality), this should be a useful addition. The default judge is still GPT-4, but passing `--judge-model gpt-4o` to gen_judgment.py now selects the new model.
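For reference, here is a minimal sketch of how such a flag can be wired up with argparse. This is only an illustration, not FastChat's actual gen_judgment.py code; the `--judge-model` name and the `gpt-4` default come from the description above, while the `make_judge` helper is a hypothetical placeholder.

```python
# Hypothetical sketch of a --judge-model flag; not FastChat's actual gen_judgment.py.
import argparse


def make_judge(model_name: str) -> None:
    # Placeholder: a real script would build the judge backed by the
    # chosen OpenAI model (gpt-4 or gpt-4o) here.
    print(f"Using judge model: {model_name}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # GPT-4 stays the default; GPT-4o is opt-in via --judge-model gpt-4o.
    parser.add_argument(
        "--judge-model",
        type=str,
        default="gpt-4",
        choices=["gpt-4", "gpt-4o"],
        help="OpenAI model used as the judge.",
    )
    args = parser.parse_args()
    make_judge(args.judge_model)
```

With a setup like this, omitting the flag keeps the existing GPT-4 behavior, and `--judge-model gpt-4o` switches the judge to GPT-4o.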
Related issue number (if applicable)
I couldn't find an issue related to this addition. Since it is such a small change, I don't think an issue is required.
Checks
- [ ✅ ] I've run `format.sh` to lint the changes in this PR. ("Your code has been rated at 9.03/10")
- [ ✅ ] I've included any doc changes needed.
- [ not applicable ] I've made sure the relevant tests are passing (if applicable). (I have looked through all the tests, but none of them seem applicable to the changes I added.)