ci: add jest test file to benchmarks
This will help bench Jest linter rules. See conversation in #4787 for details.
CodSpeed Performance Report
Merging #4792 will not alter performance
Comparing don/chore/jest-benchmark (683f5f4) with main (4dd29db)
Summary
✅ 29 untouched benchmarks
🆕 4 new benchmarks
Benchmarks breakdown
|   | Benchmark | main | don/chore/jest-benchmark | Change |
|---|---|---|---|---|
| 🆕 | lexer[coverageReport.test.ts] | N/A | 32.6 µs | N/A |
| 🆕 | parser[coverageReport.test.ts] | N/A | 162 µs | N/A |
| 🆕 | semantic[coverageReport.test.ts] | N/A | 202.1 µs | N/A |
| 🆕 | transformer[coverageReport.test.ts] | N/A | 340.8 µs | N/A |
https://github.com/oxc-project/oxc/blob/d191823a0a150a5e8f40526ca52de6567cceeef0/tasks/benchmark/benches/linter.rs#L18-L24
https://github.com/oxc-project/oxc/blob/d191823a0a150a5e8f40526ca52de6567cceeef0/.github/workflows/benchmark.yml#L162-L164
And adding more test files will hurt CI time 😢
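For context, here is a minimal sketch of why each extra fixture is costly. This is an assumption about the shape of the workflow, not the actual contents of benchmark.yml at the linked lines; the component list is inferred from the benchmark names in the CodSpeed report above.

```yaml
# Hypothetical sketch, NOT the real .github/workflows/benchmark.yml.
# A matrix job benches each component against every fixture file,
# so CI time grows roughly as (components × fixtures).
jobs:
  benchmark:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        component: [lexer, parser, semantic, transformer, linter]
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmark for one component over all fixtures
        uses: CodSpeedHQ/action@v2
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: cargo bench --bench ${{ matrix.component }}
```

Under that shape, adding one test file adds a run to every matrix job, which is where the CI-time concern comes from.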
This isn't directly related to this PR, but I want to share something that's been on my mind.

I believe we can use an on-demand CI pass for the heavy stuff. Instead of running some of these CI tasks on every push, we could trigger them with the Actions button or through a command to @oxc-bot (sketched below).

We can keep benchmarking the common paths on push as before, but provide a command that lets maintainers and the PR author run e2e benchmarks, oxlint-ecosystem, monitor-oxc, or other expensive tasks. We usually only need to run these once per review, and sometimes we wouldn't need to run them at all for trivial changes, which saves a lot of trees (as suggested by @overlookmotel).

I've mentioned this before in https://github.com/oxc-project/backlog/issues/86. I haven't worked with GitHub's bot API, but it shouldn't be too hard to implement. We could also run these on our own CI runners (a small cluster of Docker containers). Such an environment would eliminate a lot of the unpredictable variance that GitHub-hosted runners introduce, since every benchmark would run in the same environment (round-robining between machines as new requests queue). We might even be able to wall-clock-time some of our workloads instead of relying on system-independent benchmarks.
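A bot isn't strictly required to prototype this: plain GitHub Actions can react to PR comments via the `issue_comment` trigger. A minimal sketch, assuming a `/bench` command convention — the workflow name, command string, and bench target here are all hypothetical, not anything oxc has today:

```yaml
# Hypothetical on-demand benchmark trigger: commenting "/bench" on a
# PR runs the heavy benchmark suite for that PR's head branch.
name: on-demand-benchmark
on:
  issue_comment:
    types: [created]
jobs:
  bench:
    # Only react to "/bench" comments, and only on pull requests
    # (issue_comment also fires on plain issues).
    if: github.event.issue.pull_request && startsWith(github.event.comment.body, '/bench')
    runs-on: ubuntu-latest  # a self-hosted runner label here would give stable timings
    steps:
      - uses: actions/checkout@v4
      - name: Check out the PR head branch
        run: gh pr checkout ${{ github.event.issue.number }}
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Run heavy benchmarks
        run: cargo bench --bench linter
```

The same trigger could gate oxlint-ecosystem or monitor-oxc runs behind their own commands, so each is paid for only when a reviewer asks for it.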
I also noticed @overlookmotel is working toward filtering CI tasks so they only run on relevant changes. That would help as well.
I wish Graphite's CI optimization had a better way to decide which tasks run and where they run.
Closing in favor of selective benchmark setup.