Use evaluators in the benchmark scripts
What does this PR do?
This is part of an effort on my end to simplify the benchmark scripts. Since much of the pipeline-based evaluation logic was moved into subclasses of `Evaluator` in `evaluate`, we now rely on them following the 0.2.0 release, see https://huggingface.co/docs/evaluate/package_reference/evaluator_classes
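As a rough sketch of what the Evaluator-based flow looks like (the dataset, model, and column names below are illustrative, not necessarily the ones used in the benchmark scripts):

```python
# Minimal sketch of delegating evaluation to an `evaluate` Evaluator
# (requires evaluate>=0.2.0). Names here are examples, not the benchmark defaults.
from datasets import load_dataset
from evaluate import evaluator
from transformers import pipeline

data = load_dataset("glue", "sst2", split="validation")
pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# The evaluator handles batching, metric computation, and label mapping for us.
task_evaluator = evaluator("text-classification")
results = task_evaluator.compute(
    model_or_pipeline=pipe,
    data=data,
    metric="accuracy",
    input_column="sentence",
    label_column="label",
    label_mapping={"NEGATIVE": 0, "POSITIVE": 1},
)
print(results)  # e.g. {"accuracy": ...}
```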
In a follow-up PR, we could make use of them in the example scripts as well.
PRs will follow after this one to:
- Add an option to save the model along with its config and the results after a benchmark.
- Dissociate backends (do not run PyTorch and ONNX Runtime at the same time).
- Have a tutorial in the documentation on how to use the benchmark scripts.
- Ideally, I would like to support backends other than ONNX Runtime at some point. This would involve building pipeline wrappers just like for `ORTModel` (see the sketch after this list).
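For reference, a sketch of the kind of pipeline integration that already exists for `ORTModel` and that another backend would need to mirror. `from_transformers=True` is the export flag available in current optimum releases and may differ across versions:

```python
# Sketch: an ORTModel dropped into a regular transformers pipeline, which the
# Evaluator above can consume through `model_or_pipeline=`.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Export the PyTorch checkpoint to ONNX and load it with ONNX Runtime.
ort_model = ORTModelForSequenceClassification.from_pretrained(
    model_id, from_transformers=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

ort_pipe = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
print(ort_pipe("This PR simplifies the benchmark scripts."))
```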
Before submitting
- [x] Did you write any new necessary tests?