Post-Publication Peer Review: What Value Do Usage-Based Metrics Offer?

Posted by David Crotty ⋅ Apr 19, 2012 ⋅ 13 Comments

A PLoS ONE article recently went viral, hitting the front page of Reddit and garnering enormous reader interest. This was great news for the journal and the paper’s authors, but it raises questions for the notion of post-publication peer review.

As Kent Anderson recently discussed, the idea of post-publication peer review is nothing new; it’s called “science”. Publication of a paper is the end of one process but the beginning of another. [snip].

The proposed revolution, then, is not in the concept but in the tools available: ways to open that conversation to the world and to track the life of a paper after publication, the better to measure its true impact. Despite initial momentum, the implementation of these new technologies seems to have stalled.

Doing away with pre-publication peer review entirely and replacing it with post-publication review seems to have garnered little support in the research community. F1000 Research will be the biggest test of whether this approach has any viability. It seems more a strategy to increase publisher revenue than one to benefit researchers. [snip].

That leaves the search for new metrics (“altmetrics”) as perhaps the greatest hope for near-term improvement in our post-publication understanding of a paper’s value. The Impact Factor is a reasonable, if flawed, measure of a journal, but a terrible method for measuring the quality of individual papers or individual researchers. [snip]

Metrics based on social media coverage of an article tell us more about an author’s ability to network than about the quality of their experiments. Metrics based on article usage are even harder to interpret, as they offer information on reader interest and subject popularity rather than on the quality of the article itself.

For mainstream science journals, usage-based metrics don’t seem to offer the much-desired replacement for the Impact Factor. There is value in understanding the interest drawn by research, but that value is not the same as a measure of the research’s quality.

So far we’re mining all the easy and obvious metrics we can find, but they don’t offer the information we really need. Until metrics that truly deliver meaningful data on impact emerge, the altmetrics approach is in danger of stalling out. The field has reached a major crossroads.

As with so many new technologies, there’s an initial rush of enthusiasm as we imagine how they might fit into scholarly publishing. But then we hit a point where the easy and obvious approaches are exhausted without much return. Now the hard work begins.
