
Create more robust benchmarks

Open alice-i-cecile opened this issue 3 years ago • 5 comments

Before tackling more serious performance changes, I would like a wider array of benchmarks, each focused on either a realistic scenario or a particular way we could stress the algorithm.

alice-i-cecile avatar Jun 10 '22 16:06 alice-i-cecile

One option to generate bigger benchmarks would be to use a seedable RNG to pseudo-randomly generate big (but reproducible) node trees that can be benchmarked.
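
For example, a rough sketch of that idea (assuming the `rand` crate's seedable `StdRng` and the current taffy `TaffyTree` API with `new_leaf` / `new_with_children` / `compute_layout`; names may differ from the taffy version at the time of this thread):

```rust
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use taffy::prelude::*;

/// Recursively build a pseudo-random tree. Because the RNG is seeded with a
/// fixed value, every run produces the same tree, so benchmark results stay
/// comparable across runs and machines.
fn build_random_tree(taffy: &mut TaffyTree, rng: &mut StdRng, depth: u32) -> NodeId {
    if depth == 0 {
        return taffy.new_leaf(Style::default()).unwrap();
    }
    // Random fan-out per node; the range can be tuned to stress wide vs. deep trees.
    let child_count = rng.gen_range(1..=4);
    let children: Vec<NodeId> = (0..child_count)
        .map(|_| build_random_tree(taffy, rng, depth - 1))
        .collect();
    taffy.new_with_children(Style::default(), &children).unwrap()
}

fn main() {
    let mut taffy = TaffyTree::new();
    // Fixed seed => reproducible tree between benchmark runs.
    let mut rng = StdRng::seed_from_u64(42);
    let root = build_random_tree(&mut taffy, &mut rng, 6);
    taffy.compute_layout(root, Size::MAX_CONTENT).unwrap();
    println!("root layout: {:?}", taffy.layout(root).unwrap());
}
```

The same builder could be reused with different seeds, depths, and fan-out ranges to cover a range of tree shapes and sizes.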

TimJentzsch avatar Jun 10 '22 20:06 TimJentzsch

The benchmarks for yoga appear to be here. I'd love to do a head-to-head.

alice-i-cecile avatar Jun 11 '22 17:06 alice-i-cecile

I wonder if it would be useful to set up benchmarks for each of the proposed UI "archetypes" listed here: https://github.com/bevyengine/bevy/issues/1974

It could give us a more detailed picture of which types of layout suffer which kinds of bottlenecks.

Though I'm not fully sure how best to go about setting something like that up.
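
One rough shape it could take, sketched under some assumptions (criterion benchmark groups, the current taffy `TaffyTree` API, and a purely hypothetical `build_flat_hud` builder standing in for one archetype):

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use taffy::prelude::*;

/// Hypothetical builder for one archetype (a flat HUD-style overlay).
/// A real suite would have one builder per archetype from the bevy issue above.
fn build_flat_hud() -> (TaffyTree, NodeId) {
    let mut taffy = TaffyTree::new();
    let children: Vec<NodeId> = (0..50)
        .map(|_| taffy.new_leaf(Style::default()).unwrap())
        .collect();
    let root = taffy.new_with_children(Style::default(), &children).unwrap();
    (taffy, root)
}

fn archetype_benches(c: &mut Criterion) {
    // One named benchmark per archetype, collected in a single group so the
    // results can be compared side by side in criterion's report.
    let archetypes: &[(&str, fn() -> (TaffyTree, NodeId))] = &[("flat_hud", build_flat_hud)];

    let mut group = c.benchmark_group("ui_archetypes");
    for &(name, build) in archetypes {
        group.bench_function(name, |b| {
            // Rebuild the tree for each batch so only `compute_layout` is timed.
            b.iter_batched(
                build,
                |(mut taffy, root)| taffy.compute_layout(root, Size::MAX_CONTENT).unwrap(),
                BatchSize::SmallInput,
            )
        });
    }
    group.finish();
}

criterion_group!(benches, archetype_benches);
criterion_main!(benches);
```

Each archetype would get its own builder and entry in the list, so regressions show up per layout style rather than as one aggregate number.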

Weibye avatar Jun 18 '22 12:06 Weibye

It's a good idea, but I think it's best left until later. If we can produce real layouts in upstream applications like bevy, we can test them in practice rather than trying to make a best guess.

alice-i-cecile avatar Jun 18 '22 12:06 alice-i-cecile

Aye, good point

Weibye avatar Jun 18 '22 12:06 Weibye