add aggregation to rubric group (#408)
Description
Added a configurable aggregation option in `verifiers/rubrics/rubric_group.py` so `RubricGroup.score_rollout()` can also average rewards/metrics when `aggregation="mean"`. Expanded `tests/test_rubric_group.py` with coverage for the mean path and for input validation.
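Since the diff itself isn't shown here, a minimal sketch of what the mean path might look like (the class shape, signature, and validation message are assumptions; the real `RubricGroup` API may differ):

```python
# Hypothetical sketch of the aggregation option; the actual verifiers
# RubricGroup implementation and score_rollout signature may differ.
class RubricGroup:
    def __init__(self, rubrics, aggregation="sum"):
        # Validate up front so a typo fails fast instead of silently summing.
        if aggregation not in ("sum", "mean"):
            raise ValueError(
                f"aggregation must be 'sum' or 'mean', got {aggregation!r}"
            )
        self.rubrics = rubrics
        self.aggregation = aggregation

    def score_rollout(self, rollout):
        # Each rubric returns a scalar reward for the rollout.
        scores = [rubric(rollout) for rubric in self.rubrics]
        total = sum(scores)
        return total / len(scores) if self.aggregation == "mean" else total
```

With two rubrics returning 1.0 and 0.0, `aggregation="mean"` yields 0.5 where the default sum yields 1.0.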
Type of Change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
- [ ] Test improvement
Testing
- [ ] All existing tests pass when running `uv run pytest` locally
- [x] New tests have been added to cover the changes
- Ran `uv run pytest tests/test_rubric_group.py -v`
Checklist
- [x] My code follows the style guidelines of this project as outlined in AGENTS.md
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] Any dependent changes have been merged and published
Additional Notes
Previous PR (#419) was closed for bundling validation logic; this keeps scope limited to aggregation to respect that feedback.
I'm not sure I'm totally sold on the necessity of this... Generally we try to avoid having too many different ways of doing the same thing, as well as explicit conditional logic that can get buried or hidden from the user.
Weights are already configurable in rubrics for individual functions, so you can accomplish the same thing by adjusting weights directly. For something simpler to set at the RubricGroup level, my preferred mechanism would be to allow an optional weights argument to RubricGroup which behaves similarly to how we already do it for reward functions. Would this work for your use case?
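For concreteness, a rough sketch of the weights idea (the signature and names are hypothetical, mirroring how reward-function weights typically work, not the actual verifiers API):

```python
# Hypothetical sketch only; the real verifiers RubricGroup API may differ.
class RubricGroup:
    def __init__(self, rubrics, weights=None):
        # Default to unit weights, which reproduces today's plain sum.
        self.weights = weights if weights is not None else [1.0] * len(rubrics)
        if len(self.weights) != len(rubrics):
            raise ValueError("weights must match the number of rubrics")
        self.rubrics = rubrics

    def score_rollout(self, rollout):
        # Weighted sum over per-rubric scores.
        return sum(
            w * rubric(rollout) for w, rubric in zip(self.weights, self.rubrics)
        )
```

Passing `weights=[1 / n] * n` for `n` rubrics then recovers mean aggregation without a separate flag or conditional.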
thanks for the feedback! using an optional weight arg actually sounds perfect. i'll make that change!