
Chart throughput on parameterized benchmark page by default?

I am using Criterion to benchmark a function that operates on a vector of elements, using something like:

// Criterion 0.2-era API; `Width` and `fib_encode` are from the crate under test.
use criterion::{Criterion, ParameterizedBenchmark, Throughput};

const ALL: &[Width; 4] = &[Width::U8, Width::U16, Width::U32, Width::U64];
const ELTS: usize = 20;

fn encode_multiple_benchmark(c: &mut Criterion) {
    let id = "encode_multiple";
    let bm = ParameterizedBenchmark::new(
        id,
        |b, n| {
            b.iter(|| {
                // Build a vector of ELTS sampled values and encode it.
                let v = vec![n.sample() as u64; ELTS];
                v.fib_encode().expect("should be encodable")
            })
        },
        ALL,
    )
    .throughput(|_s| Throughput::Elements(ELTS as u32));
    c.bench(id, bm);
}

The benchmark overview page shows the duration that each iteration took, but that number is a bit useless on its own. The thing I'm really interested in over time is the throughput of that function, which is only given under "Additional Statistics" on the details page.

It would be really nice if Criterion were configurable to show that throughput on the parameterized benchmark's overview page by default.

antifuchs avatar May 28 '18 09:05 antifuchs

Hey, thanks for trying Criterion.rs, and thanks for the suggestion.

Yeah, that's reasonable. We could probably skip adding configuration (at least for now) and assume that anyone who configures a throughput metric on their benchmarks at all is probably more interested in the throughput than the execution time.

I probably won't get around to implementing this right away, but pull requests would be welcome.

bheisler avatar Jun 01 '18 23:06 bheisler

I would be willing to try to tackle this since I'd like to have this feature too. Can @bheisler maybe give me a pointer to the relevant function that I need to modify for this?

gz avatar Aug 09 '19 01:08 gz

I think you'd need to modify more than one function...

For now, let's scope this to just adding throughput charts to the per-benchmark reports. Reporting throughput on the summary reports raises a lot of complicated questions and edge cases (Would you want to have both throughput and execution time on the summaries? Should the violin plots show throughput instead of execution time? What if some of the benchmarks in a group have no throughput? What if they have different kinds of throughput?).

src/html/mod.rs is the entry point for generating the HTML reports. You probably want to edit measurement_complete to have it generate one or more new plots if there is a throughput (if measurements.throughput.is_some()). You'll need to convert the average iteration times in measurements.avg_times to throughput numbers and define the new plots in src/plot, probably in a new file. Depending on how you want to display these plots, you might also need to update the benchmark_report.html.tt template.
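The conversion itself is simple arithmetic: elements processed per iteration divided by the average iteration time. A minimal sketch of that step (the function name and signature are hypothetical, not Criterion's actual internals, which would also have to handle `Throughput::Bytes` and custom measurements):

```rust
/// Hypothetical helper: turn an average iteration time (in nanoseconds)
/// and a per-iteration element count into an elements-per-second figure,
/// which is what the new throughput plots would chart.
fn elements_per_second(elements: u64, avg_iter_time_ns: f64) -> f64 {
    // `elements` are processed once per iteration, each taking
    // `avg_iter_time_ns` nanoseconds; scale to a per-second rate.
    elements as f64 * 1e9 / avg_iter_time_ns
}

fn main() {
    // e.g. 20 elements encoded in 500 ns per iteration
    // => 40,000,000 elements per second
    println!("{}", elements_per_second(20, 500.0));
}
```

Each point in `measurements.avg_times` would be mapped through something like this before being handed to the plotting code.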

This will need to work with the custom measurements feature I've added to 0.3.0, so you'll need to build on top of the v0.3.0-dev branch. That will probably require a breaking change to the ValueFormatter trait, so it would have to be done before I release 0.3.0 or wait for 0.4.0.

Yeah, this isn't a trivial feature to add, partly because Criterion.rs' internal code isn't as clean as I'd like and partly because it interacts with some other features currently in development.

bheisler avatar Aug 10 '19 14:08 bheisler

Thanks, that will be very helpful. I'll look into these pointers and see what I can come up with on the 0.3 branch.

gz avatar Aug 12 '19 17:08 gz