Unexpected result metrics for simple benchmarks
Observations
When running the fibonacci example from the getting started page locally, the CLI outputs the following table:
┌─────────────────┬─────────────┬─────┬───────────────┬───────────────┐
│ Benchmark name │ Metric (ms) │ N │ Mean ± StdDev │ Median ± MAD │
├─────────────────┼─────────────┼─────┼───────────────┼───────────────┤
│ js-execution │ - │ - │ - │ - │
├─────────────────┼─────────────┼─────┼───────────────┼───────────────┤
│ └─ fibonacci 15 │ script │ 237 │ 0.097 ± 12.6% │ 0.095 ± 10.5% │
├─────────────────┼─────────────┼─────┼───────────────┼───────────────┤
│ └─ fibonacci 15 │ aggregate │ 236 │ 0.872 ± 16.2% │ 0.903 ± 5.5% │
├─────────────────┼─────────────┼─────┼───────────────┼───────────────┤
│ └─ fibonacci 15 │ paint │ 230 │ 0.032 ± 10.4% │ 0.031 ± 3.2% │
├─────────────────┼─────────────┼─────┼───────────────┼───────────────┤
│ └─ fibonacci 38 │ script │ 248 │ 0.077 ± 10.9% │ 0.080 ± 6.3% │
├─────────────────┼─────────────┼─────┼───────────────┼───────────────┤
│ └─ fibonacci 38 │ aggregate │ 238 │ 0.306 ± 7.0% │ 0.305 ± 4.9% │
└─────────────────┴─────────────┴─────┴───────────────┴───────────────┘
In the case of the fibonacci example, the paint and aggregate metrics don't make much sense.
It would be great to pass the set of metrics the benchmark is interested in to the benchmark block or the describe block, instead of gathering all the metrics by default.
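Internally, such an opt-in could boil down to filtering the collected metrics against an allowlist before reporting. A minimal, hypothetical sketch (the `filterMetrics` helper and the metric names are illustrative, not part of Best):

```javascript
// Keep only the metrics a benchmark opted into; with no allowlist,
// fall back to today's behavior of reporting everything.
function filterMetrics(collected, wanted) {
  if (!wanted) return collected; // default: keep all metrics
  return Object.fromEntries(
    Object.entries(collected).filter(([name]) => wanted.includes(name))
  );
}

const collected = { script: 0.097, aggregate: 0.872, paint: 0.032 };
console.log(filterMetrics(collected, ['script'])); // { script: 0.097 }
```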
Versions
- node: 10.16.0
- best: 4.0.0-alpha4
@pmdartus I would argue for this to be a global best configuration option: 1) because I think it's easier to reason about, and 2) because it will be hard to change the way the test runs at "runtime" (otherwise we would have to parse the JS at build time and search for those options).
You are right, I think it would be really elegant to do that at the project level.
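At the project level, this could look like a key in the project configuration file. A hypothetical sketch only — the `metrics` option is the feature being requested here, not an existing Best setting:

```javascript
// best.config.js — hypothetical sketch; `metrics` is the proposed option.
module.exports = {
  projectName: 'fibonacci-benchmarks',
  // Only collect the metrics this project cares about,
  // instead of gathering script/aggregate/paint by default.
  metrics: ['script'],
};
```

A project-level option sidesteps the runtime problem from the previous comment: the runner knows the allowlist before any benchmark code executes, so no JS parsing at build time is needed.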