lm-evaluation-harness
New benchmark: CaselawQA
Hi,
New task contribution for the CaselawQA benchmark for legal text annotation, introduced in the paper Lawma: The Power of Specialization for Legal Tasks, appearing at ICLR 2025.
Thank you for the PR! Your paper doesn't seem to specify the evaluation implementation used in the paper. Is this the official implementation? If not, have you validated that models behave the same on this as they do in the implementation used in the paper?
Yes, this is the official implementation. We will specify this in the paper for the camera-ready version, coming soon.
Great. Can you run the precommit hook to address the failing tests?
Changes:
- Removed the .yaml extension from the default templates
- Renamed the `_tiny` subtask to `_1k`, which is clearer
- Made CoT evaluation the default, rather than MMLU-style direct QA, since newer models (>= 3B params) perform better with CoT on this benchmark
- Ran the pre-commit hook
One question: our benchmark aggregates over 260 different annotation problems. While it provides a measure of overall model performance, researchers might want to evaluate models' accuracy on specific annotation problems, say `caselawqa_sc_adminaction`. I've implemented these tasks on a different branch, see here. In practice, these tasks differ only in their `dataset_name`. However, it might be unreasonable to "spam" the lm_eval library by implementing each of these 260 annotation problems as a distinct task. Thoughts?
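To make concrete what's on that branch, here is a rough sketch of how the per-subtask configs could be generated; the template name and subtask names below are placeholders rather than the exact identifiers used there:

```python
# Rough sketch of generating the per-subtask YAMLs offline. The template name
# and subtask names are placeholders, not the exact identifiers on the branch;
# only `dataset_name` (and the task name) differ between files.
import yaml

BASE_TEMPLATE = "_caselawqa_template_yaml"  # shared template, no .yaml extension
SUBTASKS = ["sc_adminaction", "sc_caseorigin"]  # placeholders; ~260 in total

for name in SUBTASKS:
    config = {
        "include": BASE_TEMPLATE,    # inherit prompt, metrics, etc.
        "task": f"caselawqa_{name}",
        "dataset_name": name,        # the only substantive per-subtask field
    }
    with open(f"caselawqa_{name}.yaml", "w") as f:
        yaml.safe_dump(config, f, sort_keys=False)
```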
@baberabb Can you advise on subtask implementation? I'm not sure if we currently support it, but one thing that comes to mind is that it could be helpful to be able to algorithmically iterate over subtasks and report subtask scores when requested by a user.
Right now we do expect each individual sub-task to have its own config, but it's worth thinking about having the option to create them programmatically at runtime, especially if they all share the same base config with minor differences. The number of configs is getting a bit unwieldy, and we need to parse them all at startup.
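For illustration, the runtime option could look roughly like the sketch below, building the subtask configs from one shared base config and a list of dataset names; the helper and subtask list are hypothetical, not an existing harness hook:

```python
# Sketch of the programmatic alternative: build the ~260 subtask configs at
# runtime from one base config instead of keeping a YAML file per subtask.
# `CASELAWQA_SUBTASKS` holds placeholder names, and nothing here is an existing
# harness API; it only illustrates that the configs differ in two fields.
import copy

CASELAWQA_SUBTASKS = ["sc_adminaction", "sc_caseorigin"]  # placeholders; ~260 names

def build_caselawqa_subtasks(base_config: dict) -> list[dict]:
    configs = []
    for name in CASELAWQA_SUBTASKS:
        cfg = copy.deepcopy(base_config)
        cfg["task"] = f"caselawqa_{name}"
        cfg["dataset_name"] = name
        configs.append(cfg)
    return configs
```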
@RicardoDominguez Can you add a link to your branch in the README? I think that would be helpful for users who want to make use of it.
Also, if you could add an entry to lm_eval/tasks/README.md describing your benchmark in a sentence, like all the other tasks!