h3 internal benchmarks
Problem
When we add or modify runtime logic in h3, we do not have an automated setup to check the performance implications of each change. We could therefore introduce regressions (or improvements) without being aware of the impact of those changes.
Solution
Add an internal benchmarking setup to compare h3 against itself from version to version.
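A minimal sketch of what such a setup could look like (a hypothetical harness, not h3's actual implementation): time a fixed number of handler invocations, report requests per second, and compare that number across versions.

```javascript
// Hypothetical micro-benchmark harness sketch (not h3's actual setup):
// time N invocations of an async request handler and report req/s.
// Running this against the same route on two h3 versions would surface
// regressions (or improvements) mechanically.
async function bench(name, handler, iterations = 10_000) {
  // Warm up first so JIT compilation does not skew the measurement.
  for (let i = 0; i < 100; i++) await handler();

  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) await handler();
  const elapsedNs = Number(process.hrtime.bigint() - start);

  return { name, rps: iterations / (elapsedNs / 1e9) };
}

// Stand-in for "h3 dispatching a request to a route handler";
// a real setup would call into the framework here.
async function fakeHandler() {
  return { status: 200, body: "ok" };
}

bench("fake-handler", fakeHandler).then((result) => {
  console.log(`${result.name}: ${Math.round(result.rps)} req/s`);
});
```

The absolute number is noisy across machines; what matters for regression detection is the relative change between two versions run on the same hardware.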
Additional Information
I do believe we should reconsider benchmarking against other frameworks once we reach 2.0 and h3 is more standalone-friendly, as it would be a good way to gain new users.
Original Answer
Hi dear @Attacler. It probably is, or at least has almost identical RPS performance.
I would rather not compare bare h3 with other frameworks, because the main goals of h3 are scalable performance through a composable utility architecture, and portability. You usually only see the main benefits of h3 in a real-world scenario, and bare benchmarks are nowhere near that, because the main logic is user code, not framework code.
Check out this for a comparison: https://github.com/fastify/benchmarks/
Also, a PR is welcome to add a perf section to the docs with the above explanations :)
Originally posted by @pi0 in https://github.com/unjs/h3/issues/293#issuecomment-1373368077
Testing h3's performance against itself is a must-have 💯
As we move towards multi-runtime support, covering both Node.js and the Web platform, I think we might need two different sets of tests.
Testing against other frameworks using bare RPS is really pointless, btw, IMO. Just look at the Fastify benchmarks and how a framework from six years ago sits at the top. Comparing against them is pointless, and our ranking could easily go up or down with a single new contribution. Also, with edge workers, things like "bundle size" and "startup time" have to be considered, which is still only possible with a real (nitro/nuxt) deployment on those platforms compared to another framework.
@pi0 For what it's worth, I think benchmarks against other frameworks are useful for marketing purposes (and it might be better to do them in an article or via a third-party website, similar to https://krausest.github.io/js-framework-benchmark/current.html).
However, I entirely agree that there are many things to take into consideration, and just measuring raw RPS isn't really meaningful. We could also benchmark internally to find where we can improve (startup time and bundle size would be interesting to compare).
Web API benchmarks have been added to main (bench:bun and bench:node).
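One way such benchmarks can stay runtime-agnostic (an assumed shape; the actual bench:node and bench:bun scripts in the repo may differ) is to exercise a fetch-style handler using only Web standard globals, so the same file runs under both Node.js (18+) and Bun:

```javascript
// Runtime-agnostic sketch: benchmark a Web-standard handler
// (Request -> Response) using only globals available in both
// Node.js >= 18 and Bun (Request, Response, performance).
async function benchWebHandler(handler, iterations = 5_000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    const res = await handler(new Request("http://localhost/"));
    await res.text(); // consume the body, as a real client would
  }
  const seconds = (performance.now() - start) / 1000;
  return iterations / seconds; // requests per second
}

// Stand-in for an app exposed as a Web handler.
const handler = async (req) =>
  new Response(`hello from ${new URL(req.url).pathname}`, { status: 200 });

benchWebHandler(handler).then((rps) => {
  console.log(`${Math.round(rps)} req/s`);
});
```

Because no server socket is involved, this isolates handler and framework overhead from network noise, which suits the internal version-to-version comparison discussed above.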