Collect Explanations from Human Judges to seed LLM Judge
Is your feature request related to a problem? Please describe. LLM judges, or "Synthetic Judges" as Doug Rosenoff refers to them, require some explanation of why an evaluation was scored the way it was.
Today we only ask for explanations on an "I can't judge" response from a human judge. We also need to make it possible to get explanations from our judges for their ratings.
Describe the solution you'd like Provide an opt-in switch on the judgement UI that turns on explanations. With the switch on, the keyboard lets you rate, then tab to the explanation field, then tab to submit; the single-keystroke keyboard rate-and-submit is disabled.
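A minimal sketch of what that keyboard flow could look like when the switch is on, assuming the rating buttons, explanation field, and submit button are plain DOM elements. All ids, class names, and the `explanationsEnabled` flag below are hypothetical, not the actual markup:

```typescript
// State of the proposed opt-in switch (hypothetical; wiring it to the UI
// toggle is out of scope for this sketch).
const explanationsEnabled = true;

// Hypothetical selectors for the judgement controls.
const ratingButtons = Array.from(
  document.querySelectorAll<HTMLButtonElement>('.rating-button')
);
const explanationField = document.getElementById(
  'judgement-explanation'
) as HTMLTextAreaElement;
const submitButton = document.getElementById(
  'submit-judgement'
) as HTMLButtonElement;

document.addEventListener('keydown', (event: KeyboardEvent) => {
  // Only single digit keys select a rating.
  if (!/^\d$/.test(event.key)) {
    return;
  }
  const digit = Number(event.key);
  if (digit >= ratingButtons.length) {
    return;
  }

  // A number key always selects the rating.
  ratingButtons[digit].classList.add('selected');

  if (explanationsEnabled) {
    // Explanations on: the single-keystroke rate-and-submit is disabled.
    // Focus moves to the explanation field, and the natural Tab order
    // (explanation field -> submit button) finishes the judgement.
    explanationField.focus();
    event.preventDefault();
  } else {
    // Explanations off: keep the existing rate-and-submit-in-one-keystroke
    // behavior.
    submitButton.click();
  }
});
```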
Describe alternatives you've considered I thought about a required switch at the book level, but the workflow for that still needs to be figured out.
Additional context Anything else, @david-fisher @sstults?
Move the "Judge Later" and "I can't tell" buttons to the scoring buttons row too!