Monday, May 26, 2008

Soft Peer Review: Social Software and Distributed Scientific Evaluation


Dario TARABORELLI, Department of Psychology, University College London, Gower Street, London WC1E 6BT, United Kingdom


The debate on the prospects of peer review in the Internet age, and the increasing criticism leveled against the dominant role of impact factor indicators, are calling for new measurable criteria to assess scientific quality. Usage-based metrics offer a new avenue for scientific quality assessment but face the same risks as first-generation search engines, which used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can make to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality assessment models in terms of coverage, efficiency and scalability. Services aggregating user-related quality indicators for online scientific content will come to occupy a key function in the scholarly communication system.

D. Taraborelli (2008), "Soft peer review: social software and distributed scientific evaluation", Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP 08), Carry-le-Rouet, France, May 20-23, 2008.



Keywords: peer review; rating; impact factor; citation analysis; usage factors; scholarly publishing; social bookmarking; collaborative annotation; online reference managers; social software; web 2.0; tagging; folksonomy

* This paper is based on ideas previously published in a post on the Academic Productivity blog.


PDF of Presentation Slides Available


1 comment:

Rich Apodaca said...

Gerry, very interesting idea and prediction.

How might blogs figure into this?

Blogging software combined with Google gives unprecedented power to anybody with something interesting to say. Science is not immune to this trend.

We're rapidly approaching a point at which the scientific publisher, in the traditional sense, becomes irrelevant.

At that point, Google PageRank will matter a whole lot more than the Impact Factor. And open scientific information aggregators, possibly relying on social networking phenomena such as tagging, would play a very important role in that.