
📎 Rule Benchmarking

victor-teles opened this issue 1 year ago

Description

To make developing and modifying rules more efficient while ensuring performance remains high, I would like to propose a new command-line task that benchmarks each rule individually, following the benchmark standards we already use.

The proposed command

cargo run -p xtask_bench --release -- --feature rule [ruleName]

and an alias

cargo bench_rule [ruleName]
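
For reference, a minimal sketch of how such an alias could be wired up in .cargo/config.toml; the [alias] table is standard Cargo behavior, while the xtask_bench package name and flags are taken from the proposed command above:

```toml
# .cargo/config.toml (sketch)
# Lets `cargo bench_rule [ruleName]` expand to the full xtask invocation;
# Cargo appends trailing arguments (the rule name) to the alias.
[alias]
bench_rule = "run -p xtask_bench --release -- --feature rule"
```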

Test data

For test data, we'll reuse the rules' existing invalid.{ts,json,js,tsx,jsx} test files.
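
As a hedged sketch of how those fixtures could be discovered per rule (the walkdir crate, the helper name, and the specs path are all assumptions, not actual xtask code):

```rust
use std::path::PathBuf;
use walkdir::WalkDir;

/// Hypothetical helper: collect every `invalid.*` fixture for a given rule.
/// The specs directory layout is an assumption about the repository.
fn invalid_fixtures(rule_name: &str) -> Vec<PathBuf> {
    let extensions = ["ts", "json", "js", "tsx", "jsx"];
    WalkDir::new("crates/biome_js_analyze/tests/specs")
        .into_iter()
        .filter_map(Result::ok)
        .map(|entry| entry.into_path())
        .filter(|path| {
            // Keep files named `invalid` with an expected extension that
            // live in a directory named after the rule.
            path.parent().map_or(false, |dir| dir.ends_with(rule_name))
                && path.file_stem().and_then(|s| s.to_str()) == Some("invalid")
                && path
                    .extension()
                    .and_then(|e| e.to_str())
                    .map_or(false, |e| extensions.contains(&e))
        })
        .collect()
}
```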

GitHub PR comment action

Since we already have a comment action to run the benchmarks for bench_analyzer, bench_cli, bench_formatter, and bench_parser, we can also introduce a new comment action for rules: !bench_rule [rule_name]

Output

Rule performance result

bench/rule.ts

| Test result | main count | This PR count | Difference |
| ----------- | ---------- | ------------- | ---------- |
| Total       | 49701      | 49701         | 0          |
| Passed      | 48721      | 48721         | 0          |
| Failed      | 980        | 980           | 0          |
| Panics      | 0          | 0             | 0          |
| Coverage    | 98.03%     | 98.03%        | 0.00%      |

victor-teles, Nov 23 '23 16:11

Thank you, Victor, for taking the lead.

Here are some thoughts that could help you:

  • there's no need to create new files; the invalid cases are perfect because they represent the apotheosis of a rule: when a diagnostic is emitted
  • diagnostics and code fixes are two separate actions, so it would be great to time/bench them separately (see the Criterion sketch after this list)
  • a rule can emit multiple actions; we should bench all of them
  • there's no need to create x100 files; when you use Criterion, it takes care of that (it runs the same "sample" many times)
  • we only need the report from Criterion; there's no need for other reports like the ones from the parsers, which have different semantics
  • I want to keep the GitHub Actions output for now because we can reuse a lot of existing work; if in the future we want to use a different tool, we should move all the bench jobs together
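
To make the diagnostics/actions split concrete, here is a minimal Criterion sketch; run_rule_diagnostics and run_rule_actions are hypothetical stand-ins for the real analyzer entry points, and the fixture path is illustrative:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-ins for the analyzer entry points; a real benchmark
// would call into Biome's analyzer to emit diagnostics and code actions.
fn run_rule_diagnostics(source: &str) -> usize {
    source.lines().count()
}

fn run_rule_actions(source: &str) -> usize {
    source.len()
}

fn bench_rule(c: &mut Criterion) {
    // Reuse the rule's existing invalid.* fixture as the benchmark input.
    let source = std::fs::read_to_string("tests/specs/style/noVar/invalid.js")
        .expect("fixture should exist");

    // Criterion resamples the same input many times, so one fixture suffices.
    c.bench_function("noVar/diagnostics", |b| {
        b.iter(|| run_rule_diagnostics(black_box(&source)))
    });

    // Time code actions separately from diagnostics.
    c.bench_function("noVar/actions", |b| {
        b.iter(|| run_rule_actions(black_box(&source)))
    });
}

criterion_group!(benches, bench_rule);
criterion_main!(benches);
```

Criterion's own report then covers sampling and statistics, which matches the point above about not needing the parser-style reports.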

ematipico, Nov 23 '23 21:11

@ematipico

Thank you for the feedback! Your thoughts make sense to me; I'll update the task description.

victor-teles, Nov 23 '23 22:11

I don't know much about Biome's internal benchmarking structure, but it could be useful to use https://codspeed.io, which comments on PRs with any performance changes.

net-tech, Dec 28 '23 18:12

Yeah, I plan to set it up in the next few weeks :)

ematipico, Dec 28 '23 18:12