StateOfJS-2020
Will interest in each framework be normalized?
I mentioned this in the other issue, but it was kinda off topic:
interest in a particular framework or library should never be measured solely as a percentage of all participants, because doing so doesn't normalize for growth in the individual communities.
Growth in interest should instead be compared against the same framework's numbers from the previous year.
For example, if React participants quadruple but Svelte participants only double, then as a percentage of the whole it looks like Svelte is in decline.
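To make that concrete with made-up numbers (a quick sketch, not real survey data):

```ts
// Hypothetical respondent counts across two survey years.
const year1 = { react: 1000, svelte: 200, total: 1200 };
const year2 = { react: 4000, svelte: 400, total: 4400 };

// Share of all participants, in percent.
const share = (n: number, total: number) => (100 * n) / total;

console.log(share(year1.svelte, year1.total).toFixed(1)); // 16.7
console.log(share(year2.svelte, year2.total).toFixed(1)); // 9.1
// Svelte doubled in absolute terms, yet its share of the total nearly
// halved, which reads as a "decline" on a share-of-participants chart.
```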
Whenever we calculate interest for rankings we take the ratio between people who said they want to learn a technology and people who said they don't. So it should be independent of the technology's size.
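Something like this sketch, assuming "interest" means the share of "want to learn" among respondents who expressed an opinion (illustrative names and numbers, not the survey's actual fields or code):

```ts
// Assumed shape: counts of respondents who have heard of a technology.
interface Sentiment {
  wantToLearn: number;   // "heard of it, would like to learn it"
  notInterested: number; // "heard of it, not interested"
}

// Interest as the share of "want to learn" among everyone who expressed
// an opinion. The technology's absolute audience size cancels out.
const interest = ({ wantToLearn, notInterested }: Sentiment) =>
  wantToLearn / (wantToLearn + notInterested);

// A small and a large community with the same sentiment score the same.
console.log(interest({ wantToLearn: 300, notInterested: 100 }));   // 0.75
console.log(interest({ wantToLearn: 3000, notInterested: 1000 })); // 0.75
```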
If the number of people favoring React is up 4x compared to everything else, and those same people say they aren't interested in X (whether because they don't care, don't have time, or are fanboys of the one thing they know), how does that help? Taking the ratio against "do not want to learn" seems like inverse normalization: it exaggerates confirmation bias.
Do you have a specific example of that issue based on the data?
Is there anywhere else to download the data? I'm having a hard time with Kaggle.
- programmatic download requires Python, and the API isn't documented (that I could find)
- my account got locked after I tried posting to the community, with this error:
oof
https://share.getcloudapp.com/NQuKg4le
thanks! <3
Given the data, I don't think there is a way to measure this, actually. Nothing indicates what people are using at the time of the survey, which means any negative experience someone had with something years ago could be affecting their choices now.
For example, if Svelte had issues initially but was amazing a year later, the opinions of the people who "heard of it and aren't interested" would be out of date, and the value of the currently collected data is diminished. 🤔
So these questions measure the sentiment of the survey audience, whoever they may be. 🤔
Is this resolvable?
I took a poke at it: https://gist.github.com/lifeart/824e28ef54fc6909d1cbc120b0ef14ce https://twitter.com/vaier/status/1355202301493977089
@lifeart that's great! We used to have a similar chart actually:
https://2018.stateofjs.com/front-end-frameworks/ember/
But we were never quite happy with it; the calculations were always pretty complex, and it was hard to make the chart meaningful and explain it clearly… But maybe we should revisit the concept like you did.
This is what the results could look like if we compute percentages from the real developer counts, then recalculate the stats around a constant developer count for every framework (for example, 500 devs per framework), using the "true" per-year likes percentage.
^ this plot corrects for fluctuations in the React developer count and for "never heard of it" deviations, and looks normalized against absolute values.
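A minimal sketch of that renormalization as I read it (field names and numbers are assumptions, not the gist's actual code):

```ts
// Assumed shape of one framework's yearly survey numbers.
interface YearStats {
  devs: number;   // respondents who answered about this framework
  likes: number;  // of those, how many reported positive sentiment
}

// Step 1: the "true" per-year likes percentage, independent of cohort size.
const likeRate = ({ devs, likes }: YearStats) => likes / devs;

// Step 2: re-project that rate onto a constant cohort (e.g. 500 devs per
// framework per year) so frameworks of different sizes become comparable.
const COHORT = 500;
const normalized = (stats: YearStats) => likeRate(stats) * COHORT;

// Hypothetical numbers: the cohort quadruples but the like rate is flat,
// so the normalized value stays flat instead of looking like a decline.
console.log(normalized({ devs: 1000, likes: 800 }));  // 400
console.log(normalized({ devs: 4000, likes: 3200 })); // 400
```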
I would need to look at your calculations in more details first, but maybe we could add them to our API directly. If you want to talk more about it you can join our Discord.