Create more robust benchmarks
Before tackling more serious performance changes, I would like a wider array of benchmarks, each focused on either a realistic scenario or a particular way we could stress the algorithm.
One option for generating bigger benchmarks would be to use a seedable RNG to pseudo-randomly generate big (but reproducible) node trees that can be benchmarked.
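Something along these lines, as a minimal sketch assuming the `rand` crate (`Node`, `build_tree`, and `count` are hypothetical stand-ins for whatever tree type the benchmarks actually use):

```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

// `Node` is a hypothetical stand-in for the real layout-node type.
struct Node {
    children: Vec<Node>,
}

// Recursively builds a pseudo-random tree: each node gets 1-4 children
// until `depth` runs out. Seeding the RNG makes the tree identical on
// every run, so benchmark inputs stay reproducible.
fn build_tree(rng: &mut StdRng, depth: u32) -> Node {
    let child_count = if depth == 0 { 0 } else { rng.gen_range(1..=4) };
    Node {
        children: (0..child_count)
            .map(|_| build_tree(rng, depth - 1))
            .collect(),
    }
}

fn count(n: &Node) -> usize {
    1 + n.children.iter().map(count).sum::<usize>()
}

fn main() {
    // Same seed => same tree, on any machine, in any run.
    let mut rng = StdRng::seed_from_u64(42);
    let tree = build_tree(&mut rng, 8);
    println!("generated {} nodes", count(&tree));
}
```

Keeping the seed fixed in the benchmark source would mean any regression can be reproduced on exactly the same input.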
The benchmarks for yoga appear to be here. I'd love to do a head-to-head.
I wonder if it would be useful to set up benchmarks for each of the proposed UI "archetypes" listed here: https://github.com/bevyengine/bevy/issues/1974
It could give us a more detailed picture of which types of layout suffer which kinds of bottlenecks.
Though I'm not entirely sure how best to go about setting something like that up; a rough sketch of one possibility follows.
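For instance, assuming criterion as the benchmark harness, something like one named benchmark per archetype (the archetype builders, the `Node` type, and `node_count` below are all hypothetical placeholders for the real tree type and layout entry point):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for the real layout-node type.
struct Node {
    children: Vec<Node>,
}

// "Deep nesting" archetype: a single chain of 100 nested nodes.
fn deep_nesting() -> Node {
    (0..100).fold(Node { children: vec![] }, |child, _| Node {
        children: vec![child],
    })
}

// "Wide and flat" archetype: one root with 1000 direct children.
fn wide_flat() -> Node {
    Node {
        children: (0..1000).map(|_| Node { children: vec![] }).collect(),
    }
}

// Placeholder for the actual layout computation being measured.
fn node_count(n: &Node) -> usize {
    1 + n.children.iter().map(node_count).sum::<usize>()
}

fn ui_archetype_benches(c: &mut Criterion) {
    // One named benchmark per archetype, so a regression can be traced
    // to a specific layout shape instead of one aggregate number.
    for (name, tree) in [("deep_nesting", deep_nesting()), ("wide_flat", wide_flat())] {
        c.bench_function(&format!("archetype/{name}"), |b| {
            b.iter(|| node_count(black_box(&tree)))
        });
    }
}

criterion_group!(benches, ui_archetype_benches);
criterion_main!(benches);
```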
It's a good idea, but I think it's best left for later. If we can produce real layouts in upstream applications like bevy, we can test them in practice rather than trying to make a best guess.
Aye, good point