Creating benchmarks to showcase speed and improvement over time
Description
Attempting to start a discussion around benchmarking performance for various features across different environments/hardware. I see this as an easy way to deter naysayers, communicate performance improvements/progress, highlight the areas most in need of optimization, etc.
I'd be happy to get something started, but was hoping for a minimal amount of guidance/direction before diving in so that an eventual PR has a higher chance of being pulled in.
Hi @csengineer13
That would be great thanks! 😄
I was thinking initially that we could simply refer to the benchmarks from https://github.com/bleroy/core-imaging-playground, but that only really covers a resizing scenario, so we should definitely create something more comprehensive.
Perhaps use that as a starting point and create a full benchmarking suite from there? See the sketch below for the kind of shape a first benchmark could take.
If you create a PR and mark it WIP we can guide as it is written. This seems to work well so far.
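A minimal sketch of what such a resize benchmark might look like with BenchmarkDotNet (the exact ImageSharp namespaces and the `Mutate`/`Resize` API depend on the version in use, and the 1280×960 source and 400×300 target sizes are arbitrary placeholders):

```csharp
using BenchmarkDotNet.Attributes;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;

public class ResizeBenchmark
{
    // 1280x960 source and 400x300 target are arbitrary placeholder sizes.
    private Image<Rgba32> source;

    [GlobalSetup]
    public void Setup() => this.source = new Image<Rgba32>(1280, 960);

    [GlobalCleanup]
    public void Cleanup() => this.source.Dispose();

    [Benchmark]
    public void Resize()
    {
        // Clone so each iteration resizes the same pristine input.
        using (Image<Rgba32> copy = this.source.Clone())
        {
            copy.Mutate(ctx => ctx.Resize(400, 300));
        }
    }
}
```

Hooking it into a runner is then just `BenchmarkRunner.Run<ResizeBenchmark>();` in a console entry point.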
I definitely want this! :)
Our ImageSharp.Benchmarks.csproj project currently targets net461 only. It would be nice to have something that utilizes all those [***Job] attributes. I'm really interested in Mono numbers.
But it also needs to be configurable, because in certain cases I want to run benchmarks on one framework only to get quick feedback; something along the lines of the sketch below would do.
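One possible shape for that, sketched against the 0.10.x-era BenchmarkDotNet API (`Job.Clr`/`Job.Core`/`Job.Mono`; later versions renamed these). The `--quick` flag is a made-up convention for this sketch, not an existing BenchmarkDotNet switch:

```csharp
using System.Linq;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

public static class Program
{
    public static void Main(string[] args)
    {
        // Hypothetical convention: "--quick" runs only .NET Core for fast
        // local feedback; otherwise the full Clr/Core/Mono matrix runs.
        IConfig config = args.Contains("--quick")
            ? DefaultConfig.Instance.With(Job.Core)
            : DefaultConfig.Instance.With(Job.Clr, Job.Core, Job.Mono);

        // Let BenchmarkDotNet pick up all [Benchmark] classes in the assembly,
        // passing through any remaining command-line arguments.
        BenchmarkSwitcher
            .FromAssembly(typeof(Program).Assembly)
            .Run(args.Where(a => a != "--quick").ToArray(), config);
    }
}
```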
Just a quick update -- Almost done with the initial setup. Expect a PR sometime this week.
Nice. Thanks!
Where are the results? Am I better off running the project myself?
@Svetomech I can open an in-progress PR if you want to see how far I got. Life happened, so I only have a couple of benchmarks in place, but it might make a nice starting point if someone else wants to carry the torch.
@dannyrb is this something you are still interested in?
Hi, I'm working on a project to follow performance evolution (and thus track regressions) during development, which seems to be what this issue is about. It is developed at siliceum, where I am both co-founder and CTO. We would be interested in getting in touch with you to discuss setting up benchmarks to be run in CI for your project. (More details on our platform, calcite, here.) I've seen that the project already has quite a few benchmarks, and we would mostly only need to write a small adapter (done on our side) to support BenchmarkDotNet so that you can upload results to our platform; the sketch below shows one possible shape for that glue.
We would provide this for free, as we think the project is very interesting and it would also help us develop our platform and gather feedback. If needed, we might even be able to provide a stabilized Linux runner. Think of it as sponsoring. 😃
While we can discuss this here, I think it would be easier if any of you (probably @antonfirsov or @dannyrb?) could reach out to me through my twitter account @lectem, by mail ( [email protected] ), or any other medium.
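Purely as an illustration of how little glue is needed (everything here is an assumption: the endpoint, the directory layout, even the report-file pattern), something like this would let BenchmarkDotNet's built-in full JSON exporter (`[JsonExporterAttribute.Full]`, or `JsonExporter.Full` in a `ManualConfig`) produce artifacts that get posted to a results endpoint:

```csharp
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Hypothetical uploader: finds the *-report-full.json files that
// BenchmarkDotNet's full JSON exporter writes into the artifacts folder
// and posts each one to a placeholder results endpoint.
public static class ResultUploader
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task UploadAsync(string artifactsDir, string endpoint)
    {
        foreach (string file in Directory.EnumerateFiles(
            artifactsDir, "*-report-full.json", SearchOption.AllDirectories))
        {
            var payload = new StringContent(
                File.ReadAllText(file), Encoding.UTF8, "application/json");

            HttpResponseMessage response = await Client.PostAsync(endpoint, payload);
            response.EnsureSuccessStatusCode(); // fail loudly in CI on upload errors
        }
    }
}
```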
@Lectem thanks for reaching out! Here are the things that are critical for making such an external contribution useful to us:
- No noise from CI infra. Normally I would run such benchmarks on a physical desktop machine. I'm not an expert on cloud stuff, but I don't see how it is possible to eliminate noise without at least having dedicated VMs.
- BenchmarkDotNet integration.
- Almost no effort from the team. (We are very short on time, with a very long backlog. I would prefer to keep our benchmarking manual if it would otherwise need too much of our attention.)
- No or very low costs (our budget is close to zero).
Thanks for the quick and clear answer. I'll take a more in-depth look at the repository setup and see what can be done, but I think we can validate all of the above points! We will draft something up as soon as possible and get back to you once we have something. (However, due to the winter holidays, I doubt we will be able to schedule it for this month.)