Add benchmarks to CI
I'm not sure how we can do this, but there's a src/benchmark.cr file. We should first make sure this file is updated to fully benchmark all the different parts of the shard, and second, figure out a way to build a release version and run it. My thought is that by adding that in, we can get a quick sense of a PR killing performance if the benchmark runs longer than X or something... (half-baked idea)
Just thinking about this again... What if the benchmark looked at some specific number, like "300ms" as an example... We say the router should ALWAYS be less than this number; if it is, then we exit with 0 or whatever the 👍 code is. If it's more, then we exit with an error code.
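Just to make that concrete, here's a minimal sketch of what the exit-code check could look like. The 300ms ceiling, the run counts, and the route setup are all made-up numbers for the example, and the matcher calls are just how I'd expect the shard's API to be used, not necessarily what src/benchmark.cr does today:

```crystal
require "./lucky_router"

# Arbitrary ceiling for the example; the real number would need tuning.
THRESHOLD_MS = 300.0

matcher = LuckyRouter::Matcher(Symbol).new
matcher.add("get", "/users/:id", :show)

runs = 10
total_ms = 0.0
runs.times do
  total_ms += Time.measure do
    100_000.times { matcher.match("get", "/users/1") }
  end.total_milliseconds
end

average_ms = total_ms / runs
puts "Average time: #{average_ms.round(2)}ms"

# Exit 0 (👍) when we're under the ceiling; non-zero so CI fails when we're over.
exit(average_ms <= THRESHOLD_MS ? 0 : 1)
```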
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ~/Development/crystal/crystal-1.0.0-1/bin/crystal build --release src/benchmark.cr
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 17s
❯ ./benchmark
Average time: 292.01ms
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 3s
❯ ./benchmark
Average time: 313.3ms
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 3s
❯ ./benchmark
Average time: 293.92ms
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ./benchmark
Average time: 293.03ms
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ crystal build --release src/benchmark.cr
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 13s
❯ ./benchmark
Average time: 291.31ms
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 3s
❯ ./benchmark
Average time: 278.86ms
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ./benchmark
Average time: 287.62ms
lucky_router on releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ./benchmark
Average time: 278.3ms
See, in this case, Crystal 1.0 seems to make the router just a little slower, and in one case it would have failed, telling us "Hey, maybe we should see what's going on". Whereas Crystal 0.36.1 is on average about 10ms faster...
(NOTE: macOS does a binary check the first time you run one, so you need to run it a few times to get a better sense of the time)
(Also NOTE: the file in src/benchmark.cr doesn't account for all the features the router supports, and may not even be the most efficient)
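If someone wants to pick that up, here's a rough sketch of how a fuller src/benchmark.cr could compare a few route shapes using the stdlib Benchmark module. The routes themselves are just examples, and other features (globs, optional params, etc.) would need their own entries:

```crystal
require "benchmark"
require "./lucky_router"

matcher = LuckyRouter::Matcher(Symbol).new
matcher.add("get", "/users", :index)                        # static route
matcher.add("get", "/users/:id", :show)                     # dynamic segment
matcher.add("get", "/posts/:post_id/comments/:id", :nested) # nested dynamic segments

Benchmark.ips do |x|
  x.report("static")  { matcher.match("get", "/users") }
  x.report("dynamic") { matcher.match("get", "/users/1") }
  x.report("nested")  { matcher.match("get", "/posts/1/comments/2") }
end
```

Benchmark.ips reports iterations per second instead of a single average, so a CI check would need to read a different number, but it would make per-feature regressions much easier to spot.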
My ideal workflow is that a GitHub Action would run the benchmark and comment on the PR: "The benchmark ran in X, it's X% faster/slower than the master branch"
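The comment body itself could come from a tiny script the action runs after benchmarking both branches. Everything below is hypothetical (the file names, and the assumption that each branch's benchmark writes its average in ms to a file):

```crystal
# Hypothetical inputs: each branch's benchmark writes its average time
# in milliseconds to a file, e.g. "313.3" and "292.01".
pr_ms     = File.read("pr_benchmark.txt").strip.to_f
master_ms = File.read("master_benchmark.txt").strip.to_f

diff_percent = ((pr_ms - master_ms) / master_ms * 100).round(2)
direction = pr_ms <= master_ms ? "faster" : "slower"

# This string would become the PR comment body.
puts "The benchmark ran in #{pr_ms}ms, it's #{diff_percent.abs}% #{direction} than the master branch"
```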
I like that idea too!