Rework `beluga_benchmark` using LAMBKIN
## Proposed changes
Precisely what the title says.
## Type of change
- [ ] 🐛 Bugfix (change which fixes an issue)
- [x] 🚀 Feature (change which adds functionality)
- [x] 📚 Documentation (change which fixes or extends documentation)
## Checklist
- [x] Lint and unit tests (if any) pass locally with my changes
- [ ] I have added tests that prove my fix is effective or that my feature works
- [x] I have added necessary documentation (if appropriate)
- [x] All commits have been signed for DCO
FYI @glpuga, this still needs some love, but the bulk of it is there.
@hidmic, should we reactivate or drop this?
I tend to lean toward the second option: as long as we publish results and replicability is possible through the lambkin repository, I don't see much advantage to having a beluga benchmarks package.
> @hidmic, should we reactivate or drop this?
> I tend to lean toward the second option: as long as we publish results and replicability is possible through the lambkin repository, I don't see much advantage to having a beluga benchmarks package.

Ideally, neither. We've discussed the implications of bringing LAMBKIN here and it's not great, yet the developer experience for running Beluga benchmarks is terrible. It makes following a data-driven development path extremely rough.
We are discussing with @LaBruma how and where to go from here.
> Ideally, neither.
I meant, having a 12-month-old paused PR is not great marketing. Regardless of what we decide to do about benchmarking, we should tilt this PR one way or the other.
> I meant, having a 12-month-old paused PR is not great marketing. Regardless of what we decide to do about benchmarking, we should tilt this PR one way or the other.
Fair enough. This won't do as-is anyway.