
Feature request: There should be a scatter chart

Open sawanm9000 opened this issue 6 years ago • 8 comments

Bundle size on one axis and speed on the other.

The best framework would be the one that's the fastest and lightest.

Here's an example of a scatter chart:

sawanm9000 avatar Aug 11 '19 06:08 sawanm9000

@utopianknight I included this chart in my repo, hope you like it! https://luwes.github.io/sinuous/bench/results/speed-size.html

It pulls the data from this repo via https://rawgit.com/krausest/js-framework-benchmark/master/webdriver-ts/results.json

All feedback is welcome! Cheers

luwes avatar Sep 28 '19 18:09 luwes

@luwes how did you calculate the x axis? I notice in a few places that implementations aren't ordered the way I'd expect from the official results.

All that being said, it really illustrates how tight in size the vast majority of top contenders are. Even if you removed the size outliers and zoomed in, it would be hard to pick out a trend. Usually these graphs highlight performance-to-size tradeoffs, and you'd look for a diagonal line to see which side of the norm a library falls on. But here, because the range is so tight, size varies more within that small band than performance does along its roughly linear increase, so the chart doesn't have the typical visual effect. There isn't much of a correlation. I think it might take expanding to the top 80 or 100 before we'd see a more progressive range.

ryansolid avatar Sep 28 '19 21:09 ryansolid

I noticed that too, I wonder how the average slowdown is calculated here https://rawgit.com/krausest/js-framework-benchmark/master/webdriver-ts-results/table.html

Could be rounding, I didn't round my numbers. Or maybe average vs median?

The calculation of the x axis is as follows:

It's 100 divided by the average of all the slowdowns (duration / fastest).

  1. First, get the fastest result for each of the benchmarks. https://github.com/luwes/sinuous/blob/b349e936aaccb5ccb4760a4ba7ec81da5709c839/bench/results/speed-size.js#L60-L67
  2. Then calculate the slowdown for each benchmark for every library.
  3. For each library, sum the slowdowns and divide by 9, since there are 9 performance-related benchmarks.
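The three steps above can be sketched roughly like this (the durations, library names, and the `results` shape are made up for illustration; they do not reflect the actual structure of `results.json`):

```javascript
// Hypothetical results: { [library]: { [benchmark]: durationMs } }
const results = {
  vanillajs: { create: 100, update: 50, swap: 20 },
  libA:      { create: 120, update: 60, swap: 30 },
};

const benchmarks = Object.keys(results.vanillajs);

// Step 1: fastest duration per benchmark across all libraries.
const fastest = {};
for (const bench of benchmarks) {
  fastest[bench] = Math.min(...Object.values(results).map(r => r[bench]));
}

// Steps 2 + 3: slowdown factor per benchmark, then the arithmetic mean.
function avgSlowdown(lib) {
  const slowdowns = benchmarks.map(b => results[lib][b] / fastest[b]);
  return slowdowns.reduce((a, b) => a + b, 0) / slowdowns.length;
}

// The chart then plots 100 / avgSlowdown(lib) on the x axis.
console.log(avgSlowdown('libA')); // ≈ (1.2 + 1.2 + 1.5) / 3 = 1.3
```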

luwes avatar Sep 28 '19 21:09 luwes

I see. Probably the difference between the arithmetic mean and the geometric mean. I believe that is what Stefan uses here. Instead of adding and dividing by 9, you would multiply them and then take the 9th root, or:

(s1 * s2 * s3 * s4 * s5 * s6 * s7 * s8 * s9)**(1/9)

The geometric mean normalizes outliers more, which could explain the small discrepancies.
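A quick sketch with made-up slowdown factors shows how a single outlier pulls the arithmetic mean up more than the geometric mean:

```javascript
// Nine hypothetical slowdown factors, one of them an outlier (4.0).
const slowdowns = [1.0, 1.1, 1.2, 1.1, 1.0, 1.3, 1.1, 1.2, 4.0];

// Arithmetic mean: sum, then divide by n.
const arithmetic = slowdowns.reduce((a, b) => a + b, 0) / slowdowns.length;

// Geometric mean: multiply, then take the n-th root.
const geometric = slowdowns.reduce((a, b) => a * b, 1) ** (1 / slowdowns.length);

console.log(arithmetic.toFixed(2)); // 1.44
console.log(geometric.toFixed(2));  // ≈ 1.29, the outlier is dampened
```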

ryansolid avatar Sep 29 '19 02:09 ryansolid

Nice! It's indeed the geometric mean. I felt it makes more sense for an average of factors.

krausest avatar Sep 29 '19 03:09 krausest

This is so good, I like it. My only two suggestions are:

  1. Show the coordinates near the axes instead of near the dots when hovering over them.
  2. Filter by the number of GitHub stars.

Now we can even see patterns emerging. For example, why are there so many frameworks on the 40% avg. slowdown line regardless of their size? Something to do with the tests themselves, maybe?

sawanm9000 avatar Oct 04 '19 08:10 sawanm9000

> why are there so many frameworks that are on the 40% avg. slowdown line regardless of their size?

It's because size and performance have very little correlation here. Small != fast. Most popular libraries sit around that 40% line; they're content to be there as long as they stay competitive with their peers on performance, and they focus on the plethora of other things involved in building a framework.

Truth be told, the numbers weren't always so close at the lower end. I think we're just seeing the results of Stefan's more powerful machine plus Chrome's work to make performance better on average, rather than specific tricks. At this point, it's the efficiency of DOM techniques that makes most of the difference, since that's where the cost is. You can almost look at the chart and guess which technique each library is using, for example all the libraries with an inefficient row swap. The effect might even be more pronounced due to the way the average is calculated on the scatter chart vs the table, but I can't say for certain.

> Filter by the number of GitHub stars.

That'd probably take some legwork, since the source material doesn't even have links back to the repos. I think you'll find the top right corner empties pretty fast as you raise the star threshold. The top 25 performing libraries, save maybe Inferno and lit-html, are around or under 1000 stars; further down there are a lot more above 1000. But it's in that cluster in the bottom half where you're going to find most of your 10k+ libraries.

ryansolid avatar Oct 04 '19 09:10 ryansolid

Honestly, it took me a while before I understood the chart. Usually the best values are at coordinates 0,0. Size in bytes increases from top to bottom, which is really counterintuitive. Same for timings: usually lower is better, so why is this the other way around?

Jogai avatar Aug 31 '20 19:08 Jogai