Proposal: crowdsource ontology reviews using the Manubot system
For background on Manubot, see: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007128
Manubot allows large teams to collaboratively edit a document in Markdown using standard GitHub workflows, where every contribution is both transparent and vetted via PRs. There are built-in features that make things like citations easy.
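To illustrate the citation feature: contributors cite by persistent identifier directly in the Markdown, and the reference list is assembled at build time (the citekey below uses the DOI of the Manubot paper itself; other prefixes such as `pmid:`, `arxiv:`, and `url:` work similarly):

```markdown
The Manubot system supports open collaborative writing
[@doi:10.1371/journal.pcbi.1007128], and the reference list is
generated automatically at build time.
```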
(Note that, of course, we are already using GH workflows for editing metadata on the OBO site, and this has worked out really well.)
The source of the ManuBot paper itself is here: https://github.com/greenelab/meta-review/blob/master/content/02.main-text.md
Proposal:
Use the Manubot system to crowdsource "living document" reviews of OBO ontologies. Note this would be more general than the current (stalled) review process, which looks at conformance to principles but not at use cases, domain coverage, ease of use by bioinformaticians, etc. This could be done via N GitHub repos, or via one GitHub repo housing all reviews, with one 'chapter' per review, compiled into a mega-review doc. We would set up a GitHub team with permission to merge PRs (inviting outside experts).
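A sketch of what the single-repo option might look like, following the Manubot rootstock convention of numbered files under content/ that are concatenated in order (file names below are hypothetical):

```
obo-reviews/
  content/
    00.front-matter.md
    01.introduction.md
    10.review-go.md       <- one 'chapter' per reviewed ontology
    11.review-chebi.md
    ...
    90.back-matter.md
  build/                  <- Manubot build scripts compile content/ into the mega-review
```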
The Markdown docs could be seeded automatically to provide a standard outline, a summary table of ontology contents and metadata, Jupyter notebooks for visualizing aggregate stats, etc. If we had the resources, we could have plugins so that the latest ontology metadata and stats are always included. Alternatively, this could be done via PRs from a bot, but that may be fiddlier.
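As a rough sketch of the seeding step (everything here is an assumption to be checked: the registry URL, its field names, and the output layout follow my reading of the current OBO registry YAML):

```python
"""Seed one review 'chapter' per ontology from the OBO Foundry registry.

Assumptions (to verify): the registry is published as YAML at REGISTRY_URL
and each entry has 'id', 'title', 'homepage', and 'description' fields.
"""
import requests
import yaml

REGISTRY_URL = "http://obofoundry.org/registry/ontologies.yml"  # assumed location

TEMPLATE = """\
## {title} ({oid})

| Field | Value |
| --- | --- |
| ID | {oid} |
| Homepage | {homepage} |

{description}

### Use cases

*To be contributed via PR.*

### Domain coverage

*To be contributed via PR.*

### Ease of use for bioinformaticians

*To be contributed via PR.*
"""


def seed_chapters(output_dir: str = "content") -> None:
    registry = yaml.safe_load(requests.get(REGISTRY_URL).text)
    for i, ont in enumerate(registry.get("ontologies", []), start=10):
        oid = ont.get("id", "unknown")
        chapter = TEMPLATE.format(
            oid=oid,
            title=ont.get("title", oid),
            homepage=ont.get("homepage", "n/a"),
            description=ont.get("description", ""),
        )
        # Numbered filenames so Manubot concatenates chapters in a stable order.
        with open(f"{output_dir}/{i:02d}.review-{oid}.md", "w") as fh:
            fh.write(chapter)


if __name__ == "__main__":
    seed_chapters()
```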
These reviews could then be linked from each ontology's page.
Challenge: how to organize this fairly? Should ontology developers or their colleagues have the ability to merge PRs on their own ontology? There are different ways to set this up, but however it is done, everything is transparent. The experiment may fail, and there may be disagreement about what content is appropriate to merge, but even that would be an informative exercise.
Challenge: will people have time to contribute? Not clear, but we assume that ontologies that get more use will attract more contributors, as those users will have the expertise. So only the most-used ontologies will have reviews in some kind of "finished" state, but this in itself is useful information. If people are unhappy with the lack of contributions, they could petition members of their user community to write.
Variant of the proposal: we provide a system whereby ontology providers can set up their own Manubot-ready repo (perhaps by seeding it via the ODK). They would have control over PRs, which may be more comfortable for them. This may be useful for providing an easy way for people to write papers about their ontologies, and it would give us leverage to incorporate things like the Minimum Information for Reporting an Ontology (MIRO) guidelines (cc @matentzn).
There is no reason to limit this to ontologies; a similar system could be used for data resources in general. Imagine a collaborative NAR database issue. Existing projects like http://reusabledata.org have had success using transparent GitHub editing of metadata about resources; this would be a natural outgrowth.
cc @dhimmel @vsmalladi @cgreene
See also: proposal to use JOSS #256
@cmungall I think a "mega-review" of all OBO ontologies is a great idea. It would help users quickly learn about available ontologies.
I think Manubot would work well to orchestrate the mega-review, especially if you are making a single review manuscript covering all ontologies (rather than a separate manuscript for each ontology). We've had success using Manubot for large-scale collaborations like this, specifically the Deep Review led by @cgreene and @agitter.
My sense of a good workflow would be:
- Set up a Manubot manuscript with guidelines for what each ontology section should contain.
- Establish some individuals as maintainers (responsible for reviewing and merging pull requests). They should be as impartial as possible with respect to the PRs they are responsible for reviewing and merging (see the CODEOWNERS sketch after this list).
- Solicit contributions of ontology sections from the creators of each ontology (or anyone else who wants to prepare sections).
- Review contribution PRs. Review can come from maintainers as well as other participants. For example, we should encourage folks associated with ontology A to not only propose a section on ontology A, but also to review proposed sections on ontologies B, C, and D.
- At some future point, you may decide to publish the current state of the document in a journal like JOSS, NAR, etc. However, the Manubot version can remain a living document for as long as it makes sense to keep updating it.
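On the maintainer point: GitHub's CODEOWNERS mechanism could route each chapter's PRs to designated reviewers, and branch protection can require code-owner approval before merging. A hypothetical fragment (usernames and paths invented for illustration):

```
# .github/CODEOWNERS
# Assign each review chapter to maintainers not affiliated with that ontology.
content/10.review-go.md      @impartial-reviewer-1 @impartial-reviewer-2
content/11.review-chebi.md   @impartial-reviewer-3
```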
> The markdown docs could be seeded automatically to provide a standard outline, summary table of ontology contents and metadata, Jupyter notebooks for visualizing aggregate stats, etc.
I am not sure exactly what you are referring to here, but it sounds like you may want some statistics or visualizations continuously computed. It is possible to template a Manubot manuscript, such that statistics like the number of GitHub stars or commits are automatically computed and kept up to date in the manuscript. It's also possible to do this with visualizations.
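As a rough illustration of the templating approach (a sketch; check the Manubot rootstock docs for the exact invocation): a CI step would regenerate a small variables file, e.g. by querying the GitHub API, and the build would substitute those values into placeholders in the content:

```markdown
<!-- content/10.review-go.md (illustrative) -->
The GO GitHub repository currently has {{go_stars}} stars and
{{go_commit_count}} commits (values refreshed on every build).
```

The variables file would then be passed to the build, e.g. via `manubot process --template-variables-path=variables.json`.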
Happy to help with any infrastructure and maintenance needs in this project!
This sounds like a good alternative review process, but it does not get at the core problems behind why the current review process is stalled: we have managed to get reviews done, but there have not been any ontologies willing to undergo, or requesting, a review. So before redoing the review process, we should:
1. define carrots and sticks that make clear to ontology developers why they would want to be reviewed;
2. establish when ontologies get re-reviewed (there are multiple ontologies in the Foundry that are not conforming to the principles, and that should arguably be rectified first);
3. reach out to the community.
I've always wanted to do an OBO-wide orthogonality analysis and then compare it against a wider set of ontologies that don't follow OBO principles. I love Manubot and think this is a terrific idea!
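For a crude first pass at that (a sketch only: label overlap via the obonet package, which is not a real orthogonality metric and ignores synonyms, xrefs, and imports; the PURLs are the standard OBO release locations but worth double-checking):

```python
"""Crude lexical-overlap check between two OBO ontologies.

A first-pass proxy for (non-)orthogonality: compares primary term labels only.
"""
import obonet


def labels(url):
    """Return the set of lower-cased primary term labels in an OBO file."""
    graph = obonet.read_obo(url)
    return {
        data["name"].lower()
        for _, data in graph.nodes(data=True)
        if "name" in data
    }


def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0


# PURLs below are the standard OBO release locations (verify before use).
go = labels("http://purl.obolibrary.org/obo/go.obo")
chebi = labels("http://purl.obolibrary.org/obo/chebi.obo")
print(f"Label Jaccard overlap, GO vs ChEBI: {jaccard(go, chebi):.4f}")
```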
As for the review process, I think it could be worthwhile to survey the community about their interests and needs. It could also be a great community engagement opportunity. Having an expert evaluator advise on any future review process would also be a great idea. Happy to help with the above, with community surveying, or with discussions of evaluation practices.
This seems relevant to some more recent discussions about ontology reviews. Tagging @wdduncan to make sure he sees it.
Can we close this as not planned?
Closing as not planned; reopen if desired