
How might we create credible systems of measuring the reputation of research as an alternative to journal title?

Open · char-siuu-bao opened this issue · 12 comments

Confused? New to GitHub? Visit the GitHub help page on our site for more information!

At a glance

Submission name: How might we create credible systems of measuring the reputation of research as an alternative to journal title?

Contact lead: formerly @jpolka (now working on https://github.com/sparcopen/doathon/issues/58)

Issue area: #OpenResearch

Region: #Global

Issue Type: #Challenge

Description

// More info coming soon!

How can others contribute?

// More info coming soon!

end.

This post is part of the OpenCon 2017 Do-A-Thon. Not sure what's going on? Head here

char-siuu-bao avatar Oct 26 '17 19:10 char-siuu-bao

It's a great challenge. I'd like to join it, at least in the GitHub discussion. We can highlight at least two different ways to move this forward. First, we can discuss a system that accompanies the existing journal reputation system, where journal reputation is just one of many features used to assess the quality of individual research. A second promising way is to discuss a publication system that is completely independent of journals. Such a reputation system may also demand a different peer-review system; one example of how we could achieve this is collaborative post-publication review with automatic assignment of a reviewer reliability score based on each reviewer's expertise. The open problem with such a system is how to find readers for a paper that has not yet been reviewed.

It's also questionable whether a journal's reputation actually matters for the discoverability of a paper in the era of Google Scholar. Do people actually read journals, or do they find papers with a search engine? And how do readers decide whether or not to read a paper they have found? We need to base the goals of a reputation system on the actual values that move science forward.
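To make the post-publication review idea a bit more concrete, here is a minimal sketch of weighting reviews by a reviewer reliability score. The data structures and the expertise-overlap heuristic are entirely made up for illustration, not any existing platform's API:

```python
# Hypothetical sketch only: the data structures and the overlap heuristic are
# invented here, not part of any existing review platform.
from dataclasses import dataclass


@dataclass
class Review:
    rating: float          # quality rating given by the reviewer, e.g. 1-5
    reviewer_topics: set   # topics the reviewer has published on
    paper_topics: set      # topics of the reviewed paper


def reliability(review: Review) -> float:
    """Crude expertise-overlap heuristic: fraction of the paper's topics
    that the reviewer has previously worked on (0 = none, 1 = all)."""
    if not review.paper_topics:
        return 0.0
    return len(review.reviewer_topics & review.paper_topics) / len(review.paper_topics)


def weighted_paper_score(reviews):
    """Reliability-weighted mean of post-publication ratings for one paper."""
    weights = [reliability(r) for r in reviews]
    if sum(weights) == 0:
        return float("nan")  # no reviewer with matching expertise yet
    return sum(w * r.rating for w, r in zip(weights, reviews)) / sum(weights)


reviews = [
    Review(4.0, {"genomics", "statistics"}, {"genomics", "gene regulation"}),
    Review(2.0, {"ecology"}, {"genomics", "gene regulation"}),
]
print(round(weighted_paper_score(reviews), 2))  # 4.0 -- the in-field review dominates
```

A scoring rule like this does not address the harder problem noted above: finding readers for papers that have not yet been reviewed.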

VorontsovIE avatar Nov 04 '17 11:11 VorontsovIE

First of all, terrific challenge, I would love to contribute on GitHub, and probably at the conference as well!

I think journal title might be a genuinely bad predictor of research quality. For example, Nature, Science, and PNAS do not necessarily do any better than other journals on important quality metrics like reproducibility, open materials in published work, and publication bias toward significant results. I have more hope for the related popularity metric of citation counts, but I believe it should be only one quality metric among many.

Some interesting alternatives to impact factors and citation rates have been discussed. See, for example, Fraley and Vazire's proposal for a quality metric based on sample size: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0109019. Another interesting initiative is the badge program started by the Open Science Framework to symbolically reward open science practices, which several journals (at least in psychology) are now starting to endorse: http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456
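If I read their paper correctly, Fraley and Vazire's metric boils down to the median sample size of a journal's published studies. A rough sketch of that computation, with invented numbers and a made-up record format, might look like this:

```python
# Rough sketch of a sample-size-based journal metric (median N of published
# studies), in the spirit of the Fraley & Vazire paper linked above.
# The (journal, sample size) records below are purely illustrative.
from collections import defaultdict
from statistics import median

studies = [
    ("Journal A", 45), ("Journal A", 120), ("Journal A", 60),
    ("Journal B", 300), ("Journal B", 250),
]

sizes_by_journal = defaultdict(list)
for journal, n in studies:
    sizes_by_journal[journal].append(n)

for journal, sizes in sorted(sizes_by_journal.items()):
    print(f"{journal}: median N = {median(sizes)}")
# Journal A: median N = 60
# Journal B: median N = 275.0
```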

I totally agree that we need to base a reputation system on meaningful values of scientific progress. A challenge, of course, is finding metrics that cannot easily be abused or "hacked". I think a plurality of metrics would be a safeguard against this.

pederisager avatar Nov 07 '17 09:11 pederisager

This is a really important challenge/issue in science. My two cents would be to promote the use of citation metrics (as mentioned above), data re-use metrics (using dataset DOI tracking), and impact statements (self-reported statements of a researcher's contribution to (open) science). I know the focus of this challenge is to discuss metrics for measuring research reputation, but I wonder if we can open up this discussion to how we can measure a researcher's overall contribution to open science (outreach, education, mentoring, etc.) rather than focusing only on research/publication output and impact. The goal would be to promote a culture where we value and hire greater numbers of well-rounded scientists (good communicators, mentors, educators, researchers), which will encourage researchers to develop skills away from the bench.

SamanthaHindle avatar Nov 08 '17 11:11 SamanthaHindle

@SamanthaHindle Oh, that adds one more dimension! We can focus on measuring impact long after publication. Another possibility is to estimate research quality just after publication, based on the authors' reputation and field of expertise. These aim at quite different goals: giving credit to the authors of important papers versus highlighting papers that are worth reading (maybe even at the preprint stage).

VorontsovIE avatar Nov 08 '17 12:11 VorontsovIE

Count me in for both online discussion and in-person!

npscience avatar Nov 09 '17 20:11 npscience

me too! but only online this time.

asbtariq avatar Nov 12 '17 14:11 asbtariq

I'll be working on https://github.com/sparcopen/doathon/issues/58 but happy to jump in on this from time to time!

jpolka avatar Nov 13 '17 10:11 jpolka

Relevant blog post by Björn Brembs

http://bjoern.brembs.net/2016/01/even-without-retractions-top-journals-publish-the-least-reliable-science/

Bubblbu avatar Nov 13 '17 12:11 Bubblbu

We are currently live and can be found at the Goethe hall, in the front, to the left!

pederisager avatar Nov 13 '17 13:11 pederisager

I dislike the impact factor as much as the next person at OpenCon, but if we (scientists, funders, reviewers, etc.) insist on having a one-number metric per journal as an indication of scientific quality/relevance/interest, I would suggest a more conservative way to calculate the "impact".

For example, because scientists have broadly accepted (and sometimes obsess over) the probability cutoff of P<0.05 in statistical analyses, I suggest we approach the impact factor similarly. We could look at the papers at the low end of the citation distribution and find the citation count below which the lowest 5% of the journal's papers lie. This number would effectively mean that if you pick a paper at random, there is a 95% chance that its citation count will be higher than that number (for the period used to calculate the citation counts). I think this makes much more sense than using the average. I will post some examples later.
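For illustration, here is a rough sketch of that percentile calculation with invented citation counts, printed next to the plain mean for comparison:

```python
# Sketch of the "conservative impact" above: the 5th percentile of a journal's
# per-paper citation counts (nearest-rank method), so ~95% of randomly picked
# papers have more citations than this value. Citation counts are invented.
import math


def conservative_impact(citations, pct=5.0):
    """Citation count at the given lower percentile of a journal's papers."""
    ranked = sorted(citations)
    rank = max(1, math.ceil(pct / 100 * len(ranked)))
    return ranked[rank - 1]


journal_citations = [0, 1, 1, 2, 3, 3, 4, 5, 8, 8, 9, 12, 15, 20, 31, 40, 55, 80, 150, 600]
print(conservative_impact(journal_citations))           # 0      (conservative "impact")
print(sum(journal_citations) / len(journal_citations))  # 52.35  (mean, inflated by one hit paper)
```

With these made-up numbers a single highly cited paper drags the mean up to about 52, while the conservative value stays at 0, which is exactly the gap the percentile approach is meant to expose.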

Another thing I'd recommend is not to restrict the metric to citations from the last two years. Five-year and longer-window impact factors are sometimes used, but I would argue the benefit of a paper often extends much further. A paper that makes a huge splash in the first six months and gets virtually no citations afterwards seems much less impressive to me than one that starts with a whimper for two years but is still relevant after twenty years.

jhk11 avatar Nov 13 '17 14:11 jhk11

We talked about alternative (paper-based) metrics for assessing impact. We mainly addressed this from a psychological scientist's perspective, although people from other disciplines contributed as well. We thought about including different measures of openness (data, materials, access, ...) as measures of credibility, but also talked about the need to assess the quality and/or importance of papers. We spent some time talking about post-publication peer review and the possibility of commenting on and rating papers. That way, we could crowd-source peer review, and reviewers could review only the parts they are "experts" on. We made an attempt to create a questionnaire for researchers from different disciplines to get "weights" for the importance of different concepts.

The concepts/possible questions we came up with so far concern:

open data
open access
open materials
open analysis scripts
open experimental scripts/procedures
open source software used
preregistration
quality: statistical power (maybe) / statcheck check
technical quality (are the methods suitable for the research question?)
...
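As a toy illustration of how such weights could be combined into a single paper-level score, here is a sketch of a weighted checklist. The weight values are placeholders standing in for the questionnaire results:

```python
# Toy sketch of a weighted openness/credibility checklist score. The weights are
# placeholders; in the proposal above they would come from the cross-disciplinary
# questionnaire.
weights = {
    "open_data": 0.20,
    "open_access": 0.15,
    "open_materials": 0.15,
    "open_analysis_scripts": 0.15,
    "open_source_software": 0.10,
    "preregistration": 0.25,
}


def credibility_score(paper_flags):
    """Weighted fraction of openness criteria the paper satisfies (0 to 1)."""
    total = sum(weights.values())
    return sum(w for key, w in weights.items() if paper_flags.get(key)) / total


example_paper = {"open_data": True, "open_access": True, "preregistration": False}
print(round(credibility_score(example_paper), 2))  # 0.35
```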

pederisager avatar Nov 13 '17 16:11 pederisager

This is a most important challenge to address. We need to develop new indicators that can become part of national-level policy as well as of rankings and other research assessment measures. Credibility and legitimacy need to be strong in any new methodological indicators.

matg20 avatar Nov 14 '17 05:11 matg20