
Add benchmarks endpoints to Orion Web API

Open notoraptor opened this issue 1 year ago • 1 comment

Description

Hi @bouthilx ! I don't know where the best place to report this work is, so I made a PR here.

This PR adds new Orion Web API entries to get benchmarks.

The PR is based on your own branch ( https://github.com/bouthilx/orion/tree/feature/benchmark_webapi ), rebased on the develop branch and extended with a few commits to fix issues.

Changes

  • Add new entries to Orion Web API:
    • /benchmarks: get available benchmarks
    • /benchmarks/:name: get given benchmark
  • Replace parameter task_num with repetitions
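
To illustrate the two new entries, here is a minimal client sketch. The endpoint paths (/benchmarks and /benchmarks/:name) and the repetitions parameter come from this PR; the base URL and the example response body are assumptions for illustration, not the actual Orion Web API schema.

```python
import json

# Assumed address of a locally running Orion Web API server.
BASE_URL = "http://127.0.0.1:8000"


def benchmark_list_url(base=BASE_URL):
    """URL for the /benchmarks entry (list available benchmarks)."""
    return f"{base}/benchmarks"


def benchmark_detail_url(name, base=BASE_URL):
    """URL for the /benchmarks/:name entry (get a given benchmark)."""
    return f"{base}/benchmarks/{name}"


# Hypothetical response body for a single benchmark; a real client would
# fetch it with e.g. requests.get(benchmark_detail_url("branin")).json().
# Note the `repetitions` field, which replaces the old `task_num` parameter.
EXAMPLE_RESPONSE = '{"name": "branin", "repetitions": 10}'

payload = json.loads(EXAMPLE_RESPONSE)
print(benchmark_detail_url(payload["name"]))
print(payload["repetitions"])
```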

Checklist

Tests

  • [ ] I added corresponding tests for bug fixes and new features. If possible, the tests fail without the changes
  • [ ] All new and existing tests are passing ($ tox -e py38; replace 38 by your Python version if necessary)

Documentation

  • [ ] I have updated the relevant documentation related to my changes

Quality

  • [ ] I have read the CONTRIBUTING doc
  • [ ] My commit messages follow this format
  • [ ] My code follows the style guidelines ($ tox -e lint)

notoraptor avatar Sep 07 '22 05:09 notoraptor

Hi @bouthilx ! I added a supplementary commit about tests: https://github.com/notoraptor/orion/commit/cd0617cf3d92d5aec33e19f1e90a2a3005c46733

There was a TODO about adding tests for a bad task, a bad assessment and a bad algorithm (no algorithm). However, tests for a bad task, a bad assessment and a bad algorithm already existed (see the tests just after the TODO). So I only added a test for the "no algorithm" case and removed the TODO comments.

There is one remaining TODO from your code here, but I don't know whether it should be resolved in this PR: https://github.com/notoraptor/orion/blob/feature/benchmark_webapi_rebased/src/orion/benchmark/init.py#L105

notoraptor avatar Sep 26 '22 18:09 notoraptor

TODO: Check coverage

notoraptor avatar Nov 23 '22 18:11 notoraptor