David A. Wheeler
Jesus M. Gonzalez-Barahona (Bitergia) has done a great deal of work on measuring OSS projects; we should look further into his and Bitergia's work. We had an interesting conversation at the 2016...
Enable varying the weights inside a browser. Since we have no "truth values" to compare against, we have had to estimate the weights using expertise, and people can always question those estimates. A...
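As a rough illustration of what such a UI would compute, here is a minimal Python sketch of re-deriving the risk index from user-adjustable weights; the metric names and default weights are hypothetical placeholders, not the project's actual parameters. A browser front end would run the same arithmetic in JavaScript as the user moves weight sliders.

```python
# Minimal sketch: recompute a risk index from user-adjustable weights.
# Metric names and default weights below are hypothetical placeholders.

DEFAULT_WEIGHTS = {
    "contributor_count": 2,   # hypothetical default weight
    "popularity": 1,
    "recent_cves": 3,
}

def risk_index(scores, weights):
    """Weighted sum of per-metric risk scores (each normalized to 0..1)."""
    return sum(weights[m] * scores.get(m, 0) for m in weights)

# Re-run with whatever weights the user has chosen:
scores = {"contributor_count": 1, "popularity": 0.5, "recent_cves": 0}
print(risk_index(scores, DEFAULT_WEIGHTS))
print(risk_index(scores, {"contributor_count": 5, "popularity": 1, "recent_cves": 1}))
```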
Report on _trends_ for FLOSS overall, in addition to identifying "projects that most need help". This would be of interest to a lot of people who aren't FLOSS developers. This...
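A minimal sketch of what trend reporting could look like, assuming (hypothetically) a directory of dated CSV result files, each with a `risk_index` column; both the file layout and the column name are assumptions, not the current output format:

```python
# Minimal sketch: summarize overall FLOSS trends from periodic census runs.
# Assumes dated result files like results/results-2016-03.csv with a
# hypothetical "risk_index" column.

import csv
import glob
import os
import statistics

def snapshot_median(path):
    """Median risk index across all projects in one census snapshot."""
    with open(path, newline="") as f:
        return statistics.median(float(row["risk_index"])
                                 for row in csv.DictReader(f))

for path in sorted(glob.glob("results/results-*.csv")):
    date = os.path.basename(path)[len("results-"):-len(".csv")]
    print(date, snapshot_median(path))
```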
Add analysis of language-level package managers (e.g., npm, PyPI, RubyGems); some of their components are very heavily depended on. An interesting example is the npm left-pad incident: http://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/
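One low-cost way to start, sketched below under the assumption that we have a local corpus of package.json files (e.g., from a partial registry mirror), is simply tallying how often each npm package appears as a dependency:

```python
# Minimal sketch: tally how often each npm package is listed as a dependency
# across a local corpus of package.json files. The corpus path is an
# assumption; a real analysis would mirror (part of) the npm registry.

import collections
import glob
import json

counts = collections.Counter()
for path in glob.glob("corpus/**/package.json", recursive=True):
    try:
        with open(path) as f:
            pkg = json.load(f)
    except (OSError, json.JSONDecodeError):
        continue
    for section in ("dependencies", "devDependencies"):
        counts.update(pkg.get(section, {}).keys())

# Heavily depended-on packages (as left-pad was) float to the top.
for name, n in counts.most_common(10):
    print(n, name)
```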
Per section 5.B of the paper: Perform static analysis on source code to determine the likely number of latent vulnerabilities (e.g., using Coverity Scan, RATS, or flawfinder); measures such as...
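A minimal sketch of the flawfinder variant, assuming flawfinder is installed and that its analysis summary includes the "Hits =" and "Physical Source Lines of Code (SLOC) =" lines (true of recent versions):

```python
# Minimal sketch: estimate latent-vulnerability density by running flawfinder
# over a source tree and computing hits per thousand SLOC.

import re
import subprocess
import sys

def flawfinder_density(src_dir, min_level=1):
    """Return flawfinder hits per KSLOC for the given source directory."""
    out = subprocess.run(
        ["flawfinder", f"--minlevel={min_level}", src_dir],
        capture_output=True, text=True).stdout
    # Parse the analysis-summary lines (format assumed from recent versions).
    hits = int(re.search(r"^Hits = (\d+)", out, re.M).group(1))
    sloc = int(re.search(r"SLOC\) = (\d+)", out, re.M).group(1))
    return 1000.0 * hits / sloc if sloc else 0.0

if __name__ == "__main__":
    print(flawfinder_density(sys.argv[1]))
```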
Currently the results file reports the final risk index, but not the breakdown of how the score was derived. The breakdown can be reconstructed from the other data fields, but it...
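One possible approach, sketched with hypothetical metric and column names, is to append a per-metric contribution column for each weighted term so the breakdown is explicit in the results file:

```python
# Minimal sketch: add per-metric contribution columns to the results file.
# Field and metric names are hypothetical placeholders for whatever the
# real results file uses.

import csv

WEIGHTS = {"contributor_count": 2, "recent_cves": 3}  # hypothetical

with open("results.csv", newline="") as fin, \
     open("results_with_breakdown.csv", "w", newline="") as fout:
    reader = csv.DictReader(fin)
    fields = reader.fieldnames + [f"{m}_contribution" for m in WEIGHTS]
    writer = csv.DictWriter(fout, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        # Record each weighted term so readers need not re-derive it.
        for m, w in WEIGHTS.items():
            row[f"{m}_contribution"] = w * float(row.get(m, 0) or 0)
        writer.writerow(row)
```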
Consider the following (from section 5.B of the paper): gather and analyze bug report processing, e.g., how long it takes (on average) to respond to a bug report, and how...
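A minimal sketch for GitHub-hosted projects, using the public REST API (unauthenticated here, so heavily rate limited; a real run would paginate and authenticate):

```python
# Minimal sketch: estimate median time-to-first-response for a project's
# GitHub issue tracker.

import json
import statistics
import urllib.request
from datetime import datetime

API = "https://api.github.com"

def get(url):
    with urllib.request.urlopen(url) as r:
        return json.load(r)

def ts(s):
    # GitHub timestamps look like 2016-03-23T12:00:00Z
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ")

def median_first_response_days(owner, repo, sample=30):
    issues = get(f"{API}/repos/{owner}/{repo}/issues?state=all&per_page={sample}")
    delays = []
    for issue in issues:
        if "pull_request" in issue:   # skip PRs; we want bug reports
            continue
        comments = get(issue["comments_url"])
        if comments:
            delays.append((ts(comments[0]["created_at"]) -
                           ts(issue["created_at"])).total_seconds() / 86400)
    return statistics.median(delays) if delays else None
```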
Consider counting how many packages depend on a given package (and possibly weighting by the dependents' popularity), to emphasize popular libraries. It may be that this is essentially captured in the popularity counts, but perhaps not.
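For distribution packages this is cheap to approximate. Here is a minimal sketch using apt-cache rdepends on a Debian-style system; the parsing is naive (the command prints two header lines followed by one dependent per line):

```python
# Minimal sketch: count a Debian package's reverse dependencies.

import subprocess

def rdepends_count(package):
    out = subprocess.run(["apt-cache", "rdepends", package],
                         capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()
    # Skip the "<package>" and "Reverse Depends:" header lines.
    return sum(1 for line in lines[2:] if line.strip())

print(rdepends_count("zlib1g"))  # a famously widely-depended-on library
```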
Consider the number of downstream-only patches. E.g., if a deb or rpm includes more than 5 patches that have not been accepted upstream, the package receives a point. Distros carry...
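A minimal sketch for Debian source packages, counting entries in debian/patches/series (the quilt convention); the threshold of 5 mirrors the rule suggested above:

```python
# Minimal sketch: score downstream-only patches in an unpacked Debian
# source package by counting entries in debian/patches/series.

import os

def downstream_patch_point(src_dir, threshold=5):
    series = os.path.join(src_dir, "debian", "patches", "series")
    try:
        with open(series) as f:
            # Each non-blank, non-comment line names one carried patch.
            patches = [ln for ln in f
                       if ln.strip() and not ln.lstrip().startswith("#")]
    except FileNotFoundError:
        return 0
    return 1 if len(patches) > threshold else 0
```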
In a future version, consider adding "truck factor" ("hit by a bus" factor) as a metric. See "What is the Truck Factor of Popular GitHub Applications? A First Assessment": https://peerj.com/preprints/1233v1.pdf...
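A deliberately naive sketch (the cited paper uses a more careful degree-of-authorship measure): treat each file's most frequent commit author as its owner, then count how many top owners must be removed before more than half the files are orphaned. It assumes it is run from inside the repository's working directory:

```python
# Minimal sketch: a naive truck factor from git history.

import collections
import subprocess

def file_owner(path):
    """The author with the most commits touching this file."""
    out = subprocess.run(["git", "log", "--format=%an", "--", path],
                         capture_output=True, text=True, check=True).stdout
    authors = collections.Counter(out.splitlines())
    return authors.most_common(1)[0][0] if authors else None

def truck_factor(repo_files):
    ownership = collections.Counter(
        o for o in (file_owner(f) for f in repo_files) if o)
    total = sum(ownership.values())
    covered, tf = 0, 0
    # Remove the biggest owners first until over half the files are orphaned.
    for _, n in ownership.most_common():
        covered += n
        tf += 1
        if covered > total / 2:
            return tf
    return tf

if __name__ == "__main__":
    files = subprocess.run(["git", "ls-files"], capture_output=True,
                           text=True, check=True).stdout.splitlines()
    print(truck_factor(files))
```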