lm-evaluation-harness

New benchmark: CaselawQA

Open · RicardoDominguez opened this issue 9 months ago • 8 comments

Hi,

New task contribution: the CaselawQA benchmark for legal text annotation, introduced in the paper Lawma: The Power of Specialization for Legal Tasks, appearing at ICLR 2025.

RicardoDominguez avatar Feb 26 '25 10:02 RicardoDominguez

CLA assistant check
All committers have signed the CLA.

CLAassistant avatar Feb 26 '25 10:02 CLAassistant

Thank you for the PR! Your paper doesn't seem to specify which evaluation implementation you used. Is this the official implementation? If not, have you validated that models behave the same on this as they do in the implementation used in the paper?

StellaAthena avatar Feb 26 '25 22:02 StellaAthena

Yes, this is the official implementation. We will specify this in the paper for the camera-ready version, coming soon.

RicardoDominguez avatar Feb 27 '25 00:02 RicardoDominguez

Great. Can you run the pre-commit hook to address the failing tests?

StellaAthena avatar Feb 28 '25 14:02 StellaAthena

Changes:

  • Removed the .yaml extension from the default templates
  • Renamed the _tiny subtask to _1k, which is clearer (used in the usage sketch below)
  • Made CoT evaluation the default, rather than MMLU-style direct QA, since newer models with >= 3B parameters perform better with CoT on this benchmark
  • Ran the pre-commit hook
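
For anyone wanting to try the task, here is a minimal usage sketch via the Python API. The task name caselawqa_1k follows the renaming above; the model arguments and the limit are placeholders for a quick smoke test, not part of this PR:

```python
# Minimal sketch: run the small CaselawQA split through lm-eval's Python API.
# Model arguments and the limit are illustrative placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                        # HuggingFace backend
    model_args="pretrained=meta-llama/Llama-3.2-3B-Instruct",
    tasks=["caselawqa_1k"],                            # the renamed 1k subset
    batch_size=8,
    limit=50,                                          # subsample for a quick check
)
print(results["results"]["caselawqa_1k"])
```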

RicardoDominguez avatar Mar 07 '25 14:03 RicardoDominguez

One question: our benchmark aggregates across 260 distinct annotation problems. While this provides a measure of overall model performance, researchers might want to evaluate models' accuracy on specific annotation problems, say caselawqa_sc_adminaction. I've implemented these tasks on a separate branch; see here. In practice, these tasks differ only in their dataset_name. However, it might be unreasonable to "spam" the lm_eval library by implementing each of these 260 sets of annotation problems as a distinct task. Thoughts?
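
For illustration, since the subtasks differ only in their dataset_name, the 260 configs could in principle be generated by a small script along these lines (the subtask names, file names, and the _default_template include are assumptions for the sketch, not the actual contents of that branch):

```python
# Hypothetical generator for per-problem task configs; each generated file
# only overrides the task name and dataset_name of a shared base template.
from pathlib import Path

SUBTASKS = ["sc_adminaction", "sc_casedisposition"]  # ...and so on for all 260 problems

for name in SUBTASKS:
    config = "\n".join([
        "include: _default_template",     # shared base config, no .yaml extension
        f"task: caselawqa_{name}",
        f"dataset_name: {name}",          # the only field that really differs
    ])
    Path(f"caselawqa_{name}.yaml").write_text(config + "\n")
```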

RicardoDominguez avatar Mar 07 '25 14:03 RicardoDominguez

@baberabb Can you advise on the subtask implementation? I'm not sure we currently support it, but one thing that comes to mind is that it could be helpful to iterate over subtasks algorithmically and report subtask scores when a user requests them.

StellaAthena avatar Mar 10 '25 16:03 StellaAthena

Right now we do expect each individual sub-task to have its own config, but it's worth thinking about having the option to create them programmatically at runtime, especially if they all share the same base config with minor differences. The number of configs is getting a bit unwieldy, and we need to parse them all at startup.
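
As a rough sketch of that runtime alternative (nothing here is an existing lm_eval API; the helper, the base config fields, and the dataset path are hypothetical):

```python
# Hypothetical sketch: build subtask configs in memory from one shared base
# config instead of shipping 260 nearly identical YAML files.
BASE_CONFIG = {
    "dataset_path": "path/to/caselawqa",   # placeholder for the actual HF dataset
    "output_type": "generate_until",
}

def build_subtask_config(subtask: str) -> dict:
    config = dict(BASE_CONFIG)
    config["task"] = f"caselawqa_{subtask}"
    config["dataset_name"] = subtask        # the only per-subtask difference
    return config

subtask_configs = [
    build_subtask_config(s) for s in ("sc_adminaction", "sc_casedisposition")
]
```

If the harness accepted such config dicts at task-loading time, per-problem scores could be reported on demand without any extra files on disk.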

@RicardoDominguez Can you add a link to your branch in the README? I think that would be helpful for users who want to make use of it.

Also, if you could add an entry to lm_eval/tasks/README.md describing your benchmark in a sentence, like all the other tasks!

baberabb avatar Mar 11 '25 19:03 baberabb