ClickBench
automate verification of modified benchmark results
The ClickBench dataset is currently split between results submitted by vendors and results re-run by maintainers. This split is an artificial data integrity issue: the two sets of results are not produced under the same verification process.
Instead, a CI job could re-run the benchmark based on which vendor's files were updated. If the vendor submitted results alongside their changes, the job could compare them against the fresh run and fail if they fall outside an acceptable margin of error.
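The comparison step could be a small script the CI job runs after re-executing the benchmark. A minimal sketch is below; the result format (a flat list of per-query timings in seconds) and the 25% relative tolerance are assumptions for illustration, not ClickBench's actual schema or policy.

```python
def within_margin(submitted: list[float], rerun: list[float],
                  tolerance: float = 0.25) -> bool:
    """Return True if every re-run timing is within `tolerance`
    (relative error) of the corresponding submitted timing.

    `tolerance=0.25` is a hypothetical 25% margin of error."""
    if len(submitted) != len(rerun):
        # Mismatched query counts mean the results are not comparable.
        return False
    for s, r in zip(submitted, rerun):
        if s == 0:
            # Avoid division by zero on zero-length timings; require the
            # re-run timing to be small in absolute terms instead.
            if abs(r) > tolerance:
                return False
        elif abs(r - s) / s > tolerance:
            return False
    return True
```

A CI job would load the submitted and re-run result files, call `within_margin`, and exit non-zero on divergence so the check fails visibly on the pull request.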
Automating result verification in this way could save maintainer time in the long run, and vendors could still submit their own results if they believe the CI run diverges from expectations.