Scaling the paper list
A call for papers doesn't scale well, and self-selection biases any data generated.
Solutions:
- Short term: centralise submitted papers in a single master paper list
  - Create a single submission repository?
  - Ability to sync appropriately flagged submissions to individual ReproHack repositories?
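One way the "sync flagged submissions" idea could work in practice: the master repository holds one issue per submitted paper, and a label flags an issue for a specific event repository, which pulls its flagged papers via the GitHub REST API. A minimal sketch; the repository name and label scheme are assumptions, not an existing ReproHack setup:

```python
import json
from urllib.request import urlopen

# Hypothetical master submission repository -- an assumption for illustration.
MASTER_REPO = "reprohack/submissions"


def issues_url(repo, label):
    """Build the GitHub REST API URL listing open issues carrying a label."""
    return f"https://api.github.com/repos/{repo}/issues?labels={label}&state=open"


def flagged_papers(issues, label):
    """Filter already-fetched issue dicts down to those flagged with `label`."""
    return [
        issue for issue in issues
        if any(lab["name"] == label for lab in issue.get("labels", []))
    ]


def fetch_flagged(repo=MASTER_REPO, label="reprohack-2019"):
    """Fetch and filter in one step (requires network access)."""
    with urlopen(issues_url(repo, label)) as resp:
        return flagged_papers(json.load(resp), label)
```

An event repository could run this on a schedule (e.g. a cron-triggered CI job) and open mirror issues for any new flagged papers.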
- Medium term: make use of frameworks that aggregate information on code and data availability
  - Zenodo research-compendium community maintained by @nuest: https://zenodo.org/communities/research-compendium/search?page=1&size=20
  - Curate Science: a platform for researchers to label and link transparency and replication information for their research.
  - Papers with Code: a platform linking machine learning papers with their code and evaluation tables.
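The Zenodo community above is also queryable programmatically through Zenodo's public records API, which could feed the master paper list directly. A sketch; the response parsing assumes Zenodo's standard `hits` envelope:

```python
import json
from urllib.request import urlopen

ZENODO_API = "https://zenodo.org/api/records"


def compendium_query(community="research-compendium", size=20):
    """Build a Zenodo records query for a given community."""
    return f"{ZENODO_API}?communities={community}&size={size}"


def titles(records_json):
    """Pull record titles out of a Zenodo records API response."""
    hits = records_json.get("hits", {}).get("hits", [])
    return [hit["metadata"]["title"] for hit in hits]


def fetch_compendium_titles(community="research-compendium"):
    """Fetch and parse in one step (requires network access)."""
    with urlopen(compendium_query(community)) as resp:
        return titles(json.load(resp))
```

This would let a periodic job diff the community's records against the master list and propose new candidates automatically.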
- Longer term: ability to sample the literature for papers that should be reproducible (according to journal policy, etc.)
Suggestion by @bast, which is probably more relevant here:

> We could let participants nominate/submit papers, not only the authors (it is easier for us to reach participants than authors), but it would be good to ask authors for their agreement before using their paper. If this is a good idea, how would it work in practice? On GitHub? Or outside of GitHub first?
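If this runs on GitHub, a standard issue template in the master repository could let both participants and authors nominate papers while recording author consent. A sketch; the file path and field wording are suggestions only:

```markdown
---
name: Paper submission
about: Nominate a paper for a ReproHack
---

**Paper title and DOI:**

**Link to code and data:**

**Are you an author of this paper?** (yes/no)

**If not, has an author agreed to the paper being used?** (yes/no/not yet asked)
```

The consent field makes @bast's "ask authors for an agreement" step explicit at submission time rather than leaving it to organisers to chase afterwards.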