Tracking `nvbench` benchmark results
Is there any interest in tracking the results from nvbench?
I'm considering adding an adapter for nvbench to my continuous benchmarking tool, Bencher: https://github.com/bencherdev/bencher
And I figured I should check in on whether there is already a preferred way to do this.
Hey @epompeii, thanks for reaching out!
I hadn't heard of Bencher before, but it looks great! A continuous benchmarking tool is something we've often talked about wanting to build but have never gotten around to.
I'd say we're definitely interested! What all is required?
Thank you for the kind words @jrhemstad!
As for what's required, it depends if you want to self-host or use the hosted version.
To self-host, it's just two Docker containers: one for the UI and one for the server + embedded DB. The DB can be set up to back up to S3 (via the litestream Docker image).
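For concreteness, a self-hosted setup along those lines might look something like the compose file below. This is only a sketch: the image names, ports, and volume layout are assumptions on my part, so please check the Bencher self-hosting docs for the actual published images and configuration.

```yaml
# Hypothetical docker-compose.yml for self-hosting Bencher.
# Image names, ports, and paths are illustrative assumptions,
# not copied from the official docs.
services:
  api:
    # Server + embedded DB; a litestream variant can back up to S3
    image: ghcr.io/bencherdev/bencher-api:latest
    ports:
      - "61016:61016"
    volumes:
      - bencher_data:/data
  console:
    # Web UI
    image: ghcr.io/bencherdev/bencher-console:latest
    ports:
      - "3000:3000"
    depends_on:
      - api
volumes:
  bencher_data:
```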
For the hosted version, you just have to sign up at https://bencher.dev and you should be good to go.
In either case, there is also the choice of compute. You can use shared GitHub Actions runners; depending on how you configure your thresholds, that is usually enough to detect performance regressions of roughly 50% or more. To reliably detect smaller regressions, you would want a dedicated, preferably bare metal runner (like an AWS bare metal instance). Eventually, I want to add bare metal runners as a feature to Bencher, so if you are interested in going this route, I would be more than willing to help with the effort.
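To make the CI side concrete, a workflow on a dedicated runner might look roughly like this. Treat it as a sketch under assumptions: the project slug, testbed name, and benchmark command are placeholders, and since an nvbench adapter doesn't exist yet, this assumes the benchmarks can emit a format one of the existing adapters (e.g. JSON) understands.

```yaml
# Hypothetical GitHub Actions workflow for continuous benchmarking
# with Bencher on a self-hosted runner. Flags and names are
# illustrative; check the Bencher CLI docs for the real interface.
name: benchmarks
on:
  push:
    branches: [main]
jobs:
  benchmark:
    # Dedicated/bare metal runner for stable timings
    runs-on: [self-hosted, bare-metal]
    steps:
      - uses: actions/checkout@v4
      - name: Install Bencher CLI
        uses: bencherdev/bencher@main
      - name: Run and track benchmarks
        run: |
          bencher run \
            --project my-project \
            --token "${{ secrets.BENCHER_API_TOKEN }}" \
            --testbed bare-metal-runner \
            --adapter json \
            "./build/my_benchmark_binary"
```

The threshold configuration (how large a change counts as a regression) lives on the Bencher side per project/testbed, which is why the same workflow can be tuned for coarse detection on shared runners or fine detection on dedicated hardware.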
I'm guessing we'd want to self-host.
We will have self-hosted Action runners coming online in the next few months that we'd use for our performance tracking.
Awesome!
I'm happy to help where I can to get things set up. My email is [email protected] or you are welcome to hop on our discord: https://discord.gg/yGEsdUh7R4 Please, feel free to ask me any questions or give candid feedback!
Hey @jrhemstad, I just wanted to follow up, how are the self-hosted Action runners coming along?